ZzzHelloWorld committed on
Commit bcec3ca · verified · 1 Parent(s): 1350de4

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete change set.
Files changed (50)
  1. .gitattributes +2 -0
  2. Appendix_sudoku/ShapeGrid_sudoku.tsv +0 -0
  3. Appendix_sudoku/appendix_sudoku.parquet +3 -0
  4. VLMEvalKit-sudoku/assets/LOGO.svg +24 -0
  5. VLMEvalKit-sudoku/docs/en/Development.md +145 -0
  6. VLMEvalKit-sudoku/docs/en/_static/css/readthedocs.css +63 -0
  7. VLMEvalKit-sudoku/docs/zh-CN/.readthedocs.yaml +17 -0
  8. VLMEvalKit-sudoku/docs/zh-CN/Development.md +139 -0
  9. VLMEvalKit-sudoku/docs/zh-CN/_static/image/logo_icon.svg +31 -0
  10. VLMEvalKit-sudoku/docs/zh-CN/docutils.conf +2 -0
  11. VLMEvalKit-sudoku/docs/zh-CN/index.rst +49 -0
  12. VLMEvalKit-sudoku/llava/eval/eval_gpt_review.py +113 -0
  13. VLMEvalKit-sudoku/llava/eval/eval_science_qa_gpt4_requery.py +149 -0
  14. VLMEvalKit-sudoku/llava/eval/model_vqa_science.py +151 -0
  15. VLMEvalKit-sudoku/llava/model/builder_new.bk +306 -0
  16. VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/adapt_clip_vision_model.cpython-310.pyc +0 -0
  17. VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/hubconf.cpython-310.pyc +0 -0
  18. VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/imagebind.cpython-310.pyc +0 -0
  19. VLMEvalKit-sudoku/llava/model/multimodal_encoder/builder.py +49 -0
  20. VLMEvalKit-sudoku/llava/model/multimodal_encoder/dev_eva_clip/eva_clip/constants.py +2 -0
  21. VLMEvalKit-sudoku/llava/model/multimodal_encoder/eva_clip/model_configs/EVA-CLIP-8B.json +27 -0
  22. VLMEvalKit-sudoku/llava/model/multimodal_encoder/eva_clip/model_configs/EVA01-CLIP-g-14.json +24 -0
  23. VLMEvalKit-sudoku/llava/model/multimodal_encoder/modeling_moonvit.py +871 -0
  24. VLMEvalKit-sudoku/llava/model/multimodal_encoder/modeling_siglip2_ps8.py +1774 -0
  25. VLMEvalKit-sudoku/scripts/visualize.ipynb +266 -0
  26. VLMEvalKit-sudoku/vlmeval/__pycache__/__init__.cpython-310.pyc +0 -0
  27. VLMEvalKit-sudoku/vlmeval/__pycache__/config.cpython-310.pyc +0 -0
  28. VLMEvalKit-sudoku/vlmeval/__pycache__/inference_mt.cpython-310.pyc +0 -0
  29. VLMEvalKit-sudoku/vlmeval/__pycache__/inference_video.cpython-310.pyc +0 -0
  30. VLMEvalKit-sudoku/vlmeval/api/bluelm_api.py +234 -0
  31. VLMEvalKit-sudoku/vlmeval/api/doubao_vl_api.py +210 -0
  32. VLMEvalKit-sudoku/vlmeval/api/gemini.py +186 -0
  33. VLMEvalKit-sudoku/vlmeval/api/taiyi.py +185 -0
  34. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/__init__.cpython-310.pyc +0 -0
  35. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/charxiv.cpython-310.pyc +0 -0
  36. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/creation.cpython-310.pyc +0 -0
  37. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/emma.cpython-310.pyc +0 -0
  38. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/gobench.cpython-310.pyc +0 -0
  39. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_caption.cpython-310.pyc +0 -0
  40. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_mt.cpython-310.pyc +0 -0
  41. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_shortqa.cpython-310.pyc +0 -0
  42. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_vqa.cpython-310.pyc +3 -0
  43. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_yorn.cpython-310.pyc +0 -0
  44. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/longvideobench.cpython-310.pyc +0 -0
  45. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mlvu.cpython-310.pyc +0 -0
  46. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmalignbench.cpython-310.pyc +0 -0
  47. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmbench_video.cpython-310.pyc +0 -0
  48. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmmath.cpython-310.pyc +0 -0
  49. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/moviechat1k.cpython-310.pyc +0 -0
  50. VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/slidevqa.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -58,3 +58,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
  eval_results/GNE_ShapeGrid_sudoku.xlsx filter=lfs diff=lfs merge=lfs -text
+ eval_results/SBE_ShapeGrid_sudoku.xlsx filter=lfs diff=lfs merge=lfs -text
+ VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_vqa.cpython-310.pyc filter=lfs diff=lfs merge=lfs -text
Appendix_sudoku/ShapeGrid_sudoku.tsv ADDED
The diff for this file is too large to render. See raw diff
 
Appendix_sudoku/appendix_sudoku.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e53f67f83ccce3f3a8c58e5b3b2bba64bbc611ae3d678087efc4232768ecc1a4
+ size 334833741
VLMEvalKit-sudoku/assets/LOGO.svg ADDED
VLMEvalKit-sudoku/docs/en/Development.md ADDED
@@ -0,0 +1,145 @@
+ # Develop new Benchmark / MLLM
+
+ > 🛠️ How to implement a new Benchmark / VLM in VLMEvalKit?
+
+ ## Implement a new benchmark
+
+ Example PR: **Math-Vision Benchmark** ([#292](https://github.com/open-compass/VLMEvalKit/pull/292/files))
+
+ In VLMEvalKit, benchmarks are organized as dataset classes. When you implement a new benchmark, you can either reuse an existing dataset class (*e.g.*, reuse `ImageMCQDataset` when implementing a new multiple-choice benchmark) or add a new dataset class. Each dataset must provide the following two member functions (either reuse the parent class's implementation or write your own):
+
+ - `build_prompt(self, line)`: The input `line` is an integer (the sample index) or a `pd.Series` object (the raw record of the sample). The function outputs a `multi-modal message` that serves as the input of an MLLM. The `multi-modal message` is an interleaved list of multi-modal items in the following format (this example includes an image and a text message): `[dict(type='image', value=IMAGE_PTH), dict(type='text', value=prompt)]`.
+ - `evaluate(self, eval_file, **judge_kwargs)`: The input `eval_file` is the MLLM prediction (typically in `.xlsx` format). If the benchmark requires an external LLM (typically GPT) for evaluation, `judge_kwargs` passes the arguments for that LLM. The function outputs the benchmark evaluation results (metrics) as a `dict` or `pd.DataFrame`.
+
+ Below, we outline the typical steps to implement a new benchmark in VLMEvalKit:
+
+ ### 1. Prepare your benchmark TSV file
+
+ Currently, we organize each benchmark as a single TSV file. During inference, the data file is automatically downloaded from the defined `DATASET_URL` link to the `$LMUData` directory (the default path is `$HOME/LMUData` if not set explicitly). You can upload the prepared TSV file to a downloadable address (e.g., Hugging Face) or send it to us at <opencompass@pjlab.org.cn>, and we will assist in uploading the dataset to the server. You can also customize the download path via the environment variable `LMUData=/path/to/your/data`.
+
+ The contents of the TSV file consist of:
+
+ | Dataset Name \ Fields | index | image | image_path | question | hint | multi-choice<br>options | answer | category | l2-category | split |
+ | --------------------------------------- | ----- | ----- | ---------- | -------- | ---- | ----------------------- | ------ | -------- | ----------- | ----- |
+ | MMBench_DEV_[CN/EN] | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | MMBench_TEST_[CN/EN] | ✅ | ✅ | | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ |
+ | CCBench | ✅ | ✅ | | ✅ | | ✅ | ✅ | ✅ | | |
+ | SEEDBench_IMG | ✅ | ✅ | | ✅ | | ✅ | ✅ | ✅ | | |
+ | MME | ✅ | ✅ | | ✅ | | | ✅ | ✅ | | |
+ | MMVet | ✅ | ✅ | | ✅ | | | ✅ | ✅ | | |
+ | MMMU_DEV_VAL | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | COCO_VAL | ✅ | ✅ | | | | | ✅ | | | |
+ | OCRVQA_[TEST/TESTCORE] | ✅ | ✅ | | ✅ | | | ✅ | | | |
+ | TextVQA_VAL | ✅ | ✅ | | ✅ | | | ✅ | | | |
+ | VCR_[EN/ZH]\_[EASY/HARD]\_[ALL/500/100] | ✅ | ✅ | | ✅ | | | ✅ | | | |
+ | MMMB_[en/cn/pt/ar/tr/ru] | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ |
+ | MMBench_dev_[en/cn/pt/ar/tr/ru] | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+
+ <div align="center"><b>Table 1. TSV fields of supported datasets.</b></div>
+
+ **Intro to mandatory fields in the `TSV` file:**
+
+ - **index:** An integer, unique for each line in the `tsv`
+ - **image:** The base64 encoding of the image; you can use the APIs implemented in `vlmeval/smp/vlm.py` for encoding and decoding:
+   - Encoding: `encode_image_to_base64` (for a PIL Image) / `encode_image_file_to_base64` (for an image file path)
+   - Decoding: `decode_base64_to_image` (for a PIL Image) / `decode_base64_to_image_file` (for an image file path)
+ - **question:** The question corresponding to the image, a string
+ - **answer:** The answer to the question, a string. The `test` split does not need this field
+
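For concreteness, here is a sketch of assembling such a TSV with pandas. The file name and the placeholder image bytes are invented for illustration; a real benchmark would store the base64 of actual images, e.g. via `encode_image_file_to_base64`:

```python
import base64

import pandas as pd

# Placeholder bytes standing in for a real image file.
fake_image_b64 = base64.b64encode(b'not-a-real-image').decode()

records = [
    dict(index=0, image=fake_image_b64,
         question='What is in this image?', answer='An apple'),
    dict(index=1, image=fake_image_b64,
         question='How many apples are there?', answer='Two'),
]
df = pd.DataFrame(records)
df.to_csv('my_benchmark.tsv', sep='\t', index=False)

# Round trip: the TSV holds one row per sample with the mandatory fields.
loaded = pd.read_csv('my_benchmark.tsv', sep='\t')
print(len(loaded), list(loaded.columns))
```

The optional fields from Table 1 (hint, category, split, etc.) would simply be extra columns of the same DataFrame.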
+ ### 2. Customize your benchmark prompt
+
+ `ImageBaseDataset` defines the default prompt format. If you need to add dataset-specific prompts or feed the model input in the `Interleave` format, you can do so through the `build_prompt(line)` function. This function takes a line from the TSV file as input (containing fields such as index, image, question, etc.) and returns a list of multimodal message dictionaries `msg` in the format `[dict(type='image', value=IMAGE_PTH), dict(type='text', value=prompt)]`, including the image path and the text prompt to be fed into the VLM. For interleave-type inputs, you can directly place the image-path dictionary at the image token position.
+
+ ### 3. Customize your benchmark metrics
+
+ To add evaluation for a new benchmark, you need to define a class that implements the dataset's metric calculation. Multimodal datasets inherit from the `ImageBaseDataset` object in `vlmeval/dataset/image_base.py`. `TYPE` defines the type of the dataset, `DATASET_URL` is the download address of the dataset, and `DATASET_MD5` is the MD5 checksum used to verify the integrity of the dataset file.
+
+ In this class, **you need to implement** the `evaluate(eval_file, **judge_kwargs)` class function to calculate metrics and output results for the custom dataset. The input `eval_file` is the path to the model prediction results file `{model_name}_{dataset}.xlsx`. This file can be read as a `pandas.DataFrame` using the `load(eval_file)` method and contains fields such as index, question, answer, category, prediction, etc. `judge_kwargs` passes an evaluation-related dictionary, such as the name of the judge model, the number of API request threads, etc. **The return value** of the function is the computed accuracy and other metrics, formatted as a dictionary of lists and organized into a `pandas.DataFrame`.
+
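Putting steps 2 and 3 together, a minimal custom dataset class might look like the sketch below. Everything here is illustrative: the class name, the exact-match metric, and the URL/MD5 placeholders are assumptions; the base class is stubbed so the sketch is self-contained (in VLMEvalKit you would inherit from the real `ImageBaseDataset`), and for the same reason `evaluate` also accepts an in-memory DataFrame instead of an `.xlsx` path:

```python
import pandas as pd


class ImageBaseDataset:  # stub standing in for vlmeval.dataset.image_base.ImageBaseDataset
    def __init__(self, data):
        self.data = data


class MyBenchmark(ImageBaseDataset):  # hypothetical benchmark class
    TYPE = 'VQA'
    DATASET_URL = 'https://example.com/my_benchmark.tsv'  # placeholder
    DATASET_MD5 = None  # placeholder

    def build_prompt(self, line):
        # `line` may be an int index or a raw record, as described above.
        if isinstance(line, int):
            line = self.data.iloc[line]
        return [dict(type='image', value=line['image_path']),
                dict(type='text', value=line['question'])]

    def evaluate(self, eval_file, **judge_kwargs):
        data = (pd.read_excel(eval_file)
                if str(eval_file).endswith('.xlsx') else eval_file)
        # Illustrative metric: case-insensitive exact match of prediction vs. answer.
        hit = (data['prediction'].str.strip().str.lower()
               == data['answer'].str.strip().str.lower())
        return pd.DataFrame(dict(metric=['acc'], value=[100 * hit.mean()]))


data = pd.DataFrame(dict(image_path=['a.jpg', 'b.jpg'],
                         question=['q0', 'q1'],
                         answer=['Apple', '2'],
                         prediction=['apple', '3']))
bench = MyBenchmark(data)
print(bench.build_prompt(0))
print(bench.evaluate(data))
```

A real benchmark would of course replace the exact-match rule with whatever metric the task requires, possibly delegating to a judge model via `judge_kwargs`.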
+ ## Implement a new model
+
+ Example PR: **Support LLaVA-Next-Interleave** ([#294](https://github.com/open-compass/VLMEvalKit/pull/294))
+
+ **1. Support the `generate_inner` API (mandatory).**
+
+ All existing models are implemented in `vlmeval/vlm`. For a minimal model, your model class **must implement the method** `generate_inner(msgs, dataset=None)`. In this function, you feed a multi-modal message to your VLM and return the VLM prediction (a string). The optional argument `dataset` can be used as a flag for the model to switch among various inference strategies.
+
+ The multi-modal message `msgs` is a list of dictionaries, each with two keys, `type` and `value`:
+ - `type`: We currently support two types; choices are ["image", "text"].
+ - `value`: When `type=='text'`, the value is the text message (a single string); when `type=='image'`, the value can be the local path of an image file or an image URL.
+
+ Currently, a multi-modal message may contain arbitrarily interleaved images and texts. If your model does not support that, a common practice is to take the first image and the concatenated text messages as the input. You can set `INTERLEAVE = False` in your model class and use `self.message_to_promptimg(message, dataset=dataset)` to build your prompt and obtain the first image's path.
+
+ Here are some examples of multi-modal messages:
+
+ ```python
+ IMAGE_PTH = 'assets/apple.jpg'
+ IMAGE_URL = 'https://raw.githubusercontent.com/open-compass/VLMEvalKit/main/assets/apple.jpg'
+ msg1 = [
+     dict(type='image', value=IMAGE_PTH),
+     dict(type='text', value='What is in this image?')
+ ]
+ msg2 = [
+     dict(type='image', value=IMAGE_URL),
+     dict(type='image', value=IMAGE_URL),
+     dict(type='text', value='How many apples are there in these images?')
+ ]
+ response = model.generate(msg1)
+ ```
+
+ For convenience's sake, we also support taking a list of strings as input. In that case, we check whether each string is an image path or an image URL and automatically convert the list to the `list[dict]` format:
+
+ ```python
+ IMAGE_PTH = 'assets/apple.jpg'
+ IMAGE_URL = 'https://raw.githubusercontent.com/open-compass/VLMEvalKit/main/assets/apple.jpg'
+ msg1 = [IMAGE_PTH, 'What is in this image?']
+ msg2 = [IMAGE_URL, IMAGE_URL, 'How many apples are there in these images?']
+ response = model.generate(msg1)
+ ```
+
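As a minimal illustration of the `generate_inner` contract, consider the toy class below. The `EchoVLM` class and its canned reply are made up; a real model would run actual inference where the comment indicates:

```python
class EchoVLM:
    """Toy model showing the generate_inner(msgs, dataset=None) contract."""

    INTERLEAVE = True  # this toy model accepts interleaved images and texts

    def generate_inner(self, msgs, dataset=None):
        n_images = sum(1 for m in msgs if m['type'] == 'image')
        texts = ' '.join(m['value'] for m in msgs if m['type'] == 'text')
        # A real VLM would run inference here; we just describe the input.
        return f'Received {n_images} image(s); question was: {texts}'


model = EchoVLM()
msg = [dict(type='image', value='assets/apple.jpg'),
       dict(type='text', value='What is in this image?')]
print(model.generate_inner(msg))
```

The only hard requirement is that the return value is a single string; how the model consumes the interleaved items is up to you.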
+ **2. Support custom prompt building (optional).**
+
+ Besides, your model can support **custom prompt building** by implementing two optional methods: `use_custom_prompt(dataset)` and `build_prompt(line, dataset=None)`.
+
+ Both functions take the dataset name as input:
+
+ - `use_custom_prompt(dataset)` returns a boolean flag indicating whether the model should use the custom prompt building strategy.
+ - If `use_custom_prompt(dataset)` returns True, `build_prompt(line, dataset)` should return a custom-built multimodal message for the corresponding `dataset`, given `line`, a dictionary that includes the necessary information of a data sample. If `use_custom_prompt(dataset)` returns False, the default prompt building strategy is used.
+
+ **3. Support multi-turn chatting (optional).**
+
+ You can also support multi-turn chatting and evaluation with your VLM by implementing the `chat_inner(message, dataset)` function. The function outputs a single string response, and `message` is a list of chat history entries in the following format:
+
+ ```python
+ # Assume msg1, msg2, msg3, ... are multi-modal messages following the previously described format
+ # `chat_inner` takes the following chat history list as input:
+ message = [
+     dict(role='user', content=msg1),
+     dict(role='assistant', content=msg2),
+     dict(role='user', content=msg3),
+     dict(role='assistant', content=msg4),
+     ......
+     dict(role='user', content=msgn),
+ ]
+ # `message` should contain an odd number of chat utterances; the roles should alternate between "user" and "assistant", with the last utterance being "user".
+ # The chat function will call `chat_inner`
+ response = model.chat(message)
+ ```
+
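The invariants described above (odd number of turns, strictly alternating roles, last turn from the user) can be checked mechanically. A small helper along these lines (`validate_chat_history` is a hypothetical name, not a VLMEvalKit API) might be:

```python
def validate_chat_history(message):
    """Check the chat-history invariants: odd length, alternating roles, last turn 'user'."""
    if len(message) % 2 != 1:
        raise ValueError('chat history must contain an odd number of utterances')
    for i, turn in enumerate(message):
        # Even positions must be the user; odd positions the assistant.
        expected = 'user' if i % 2 == 0 else 'assistant'
        if turn['role'] != expected:
            raise ValueError(f'turn {i} should have role {expected!r}')
    return True


history = [
    dict(role='user', content=[dict(type='text', value='Hi')]),
    dict(role='assistant', content=[dict(type='text', value='Hello!')]),
    dict(role='user', content=[dict(type='text', value='Describe the image.')]),
]
print(validate_chat_history(history))
```

Running such a check at the top of `chat_inner` gives a clear error instead of a confusing model failure when the history is malformed.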
+ ### Example PRs:
+
+ - VLM that doesn't support interleaved images and texts, and does not use custom prompts: [[Model] Support glm-4v-9b](https://github.com/open-compass/VLMEvalKit/pull/221)
+ - VLM that supports interleaved images and texts and custom prompts: [Add MiniCPM-Llama3-V-2.5](https://github.com/open-compass/VLMEvalKit/pull/205)
+ - VLM API: [Feature add glmv](https://github.com/open-compass/VLMEvalKit/pull/201)
+
+ ## Contribute to VLMEvalKit
+
+ If you want to contribute code to **VLMEvalKit**, please run the pre-commit check before you submit a PR. That helps keep the code tidy.
+
+ ```bash
+ # Under the directory of VLMEvalKit, install the pre-commit hook:
+ pip install pre-commit
+ pre-commit install
+ pre-commit run --all-files
+ # Then you can commit your code.
+ ```
VLMEvalKit-sudoku/docs/en/_static/css/readthedocs.css ADDED
@@ -0,0 +1,63 @@
+ .header-logo {
+   background-image: url("../image/logo.svg");
+   background-size: 275px 80px;
+   height: 80px;
+   width: 275px;
+ }
+
+
+ @media screen and (min-width: 1100px) {
+   .header-logo {
+     top: -25px;
+   }
+ }
+
+ pre {
+   white-space: pre;
+ }
+
+ @media screen and (min-width: 2000px) {
+   .pytorch-content-left {
+     width: 1200px;
+     margin-left: 30px;
+   }
+   article.pytorch-article {
+     max-width: 1200px;
+   }
+   .pytorch-breadcrumbs-wrapper {
+     width: 1200px;
+   }
+   .pytorch-right-menu.scrolling-fixed {
+     position: fixed;
+     top: 45px;
+     left: 1580px;
+   }
+ }
+
+
+ article.pytorch-article section code {
+   padding: .2em .4em;
+   background-color: #f3f4f7;
+   border-radius: 5px;
+ }
+
+ /* Disable the change in tables */
+ article.pytorch-article section table code {
+   padding: unset;
+   background-color: unset;
+   border-radius: unset;
+ }
+
+ table.autosummary td {
+   width: 50%;
+ }
+
+ img.align-center {
+   display: block;
+   margin-left: auto;
+   margin-right: auto;
+ }
+
+ article.pytorch-article p.rubric {
+   font-weight: bold;
+ }
VLMEvalKit-sudoku/docs/zh-CN/.readthedocs.yaml ADDED
@@ -0,0 +1,17 @@
+ version: 2
+
+ # Set the version of Python and other tools you might need
+ build:
+   os: ubuntu-22.04
+   tools:
+     python: "3.8"
+
+ formats:
+   - epub
+
+ sphinx:
+   configuration: docs/zh-CN/conf.py
+
+ python:
+   install:
+     - requirements: requirements/docs.txt
VLMEvalKit-sudoku/docs/zh-CN/Development.md ADDED
@@ -0,0 +1,139 @@
+ # 🛠️ How to implement a new Benchmark or multi-modal model (VLM) in VLMEvalKit
+
+ ## Implement a new benchmark
+
+ Example PR: **Add the Math-Vision Benchmark** ([#292](https://github.com/open-compass/VLMEvalKit/pull/292/files))
+
+ In VLMEvalKit, benchmarks are organized as dataset classes. When you add a new benchmark, you can either reuse an existing dataset class (e.g., multiple-choice benchmarks can reuse `ImageMCQDataset`) or implement a new dataset class. Your dataset class must support the following two methods (reused from a parent class or implemented yourself):
+
+ - `build_prompt(self, line)`: The input `line` is an int (the sample index) or a `pd.Series` (the raw record of the sample). The method outputs a `multi-modal message` as the input of the multi-modal model; the `multi-modal message` is an interleaved list of images and texts, e.g. (one image plus one text): `[dict(type='image', value=IMAGE_PTH), dict(type='text', value=prompt)]`.
+ - `evaluate(self, eval_file, **judge_kwargs)`: The input `eval_file` is the multi-modal model's prediction results (usually in `.xlsx` format). If the benchmark evaluation needs a large language model (usually GPT) as a judge, `judge_kwargs` passes the arguments for that LLM. The method outputs the benchmark evaluation results as a `dict` or `pd.DataFrame`.
+
+ Below, we outline the typical steps for adding a new dataset:
+
+ ### 1. Prepare the TSV data file (image-text benchmarks)
+
+ Currently, we organize each benchmark dataset as a single TSV file. During inference, the data file is automatically downloaded from the `DATASET_URL` defined for the dataset to `$LMUData` (the default path is `$HOME/LMUData` if not set explicitly). You can upload the prepared TSV file to a downloadable address (e.g., Hugging Face) or send it to us at <opencompass@pjlab.org.cn>, and we will help upload the dataset to the server. You can also customize the download path via the environment variable `LMUData=/path/to/your/data`.
+
+ The contents of the TSV file consist of:
+
+ | Dataset Name \ Fields | index | image | image_path | question | hint | multi-choice<br>options | answer | category | l2-category | split |
+ | ---------------------- | ----- | ----- | ---------- | -------- | ---- | ----------------------- | ------ | -------- | ----------- | ----- |
+ | MMBench_DEV_[CN/EN] | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | MMBench_TEST_[CN/EN] | ✅ | ✅ | | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ |
+ | CCBench | ✅ | ✅ | | ✅ | | ✅ | ✅ | ✅ | | |
+ | SEEDBench_IMG | ✅ | ✅ | | ✅ | | ✅ | ✅ | ✅ | | |
+ | MME | ✅ | ✅ | | ✅ | | | ✅ | ✅ | | |
+ | MMVet | ✅ | ✅ | | ✅ | | | ✅ | ✅ | | |
+ | MMMU_DEV_VAL | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | COCO_VAL | ✅ | ✅ | | | | | ✅ | | | |
+ | OCRVQA_[TEST/TESTCORE] | ✅ | ✅ | | ✅ | | | ✅ | | | |
+ | TextVQA_VAL | ✅ | ✅ | | ✅ | | | ✅ | | | |
+ | VCR_[EN/ZH]\_[EASY/HARD]\_[ALL/500/100] | ✅ | ✅ | | ✅ | | | ✅ | | | |
+
+ <div align="center"><b>Table 1. TSV fields of supported datasets.</b></div>
+
+ **Mandatory fields in the TSV file:**
+
+ - **index:** An integer, a unique identifier for each row in the `tsv`
+ - **image:** The base64 encoding of the image; you can use the APIs implemented in `vlmeval/smp/vlm.py` for encoding and decoding:
+   - Encoding: `encode_image_to_base64` (for a PIL Image) / `encode_image_file_to_base64` (for an image file path)
+   - Decoding: `decode_base64_to_image` (for a PIL Image) / `decode_base64_to_image_file` (for an image file path)
+ - **question:** The question posed about the image, a string
+ - **answer:** The answer to the question, a string; the test split may omit this field
+
+ ### 2. Customize the prompt building for your dataset
+
+ `ImageBaseDataset` defines the default prompt format. If you need to add dataset-specific prompts or feed the model data in the `Interleave` format, you can do so via the `build_prompt(line)` function. Its input is a single row of the TSV file (as `line`), containing fields such as index, image, question, etc. The function returns a list of multimodal message dictionaries `msg` in the format `[dict(type='image', value=IMAGE_PTH), dict(type='text', value=prompt)]`, including the image path and the text prompt to be fed into the VLMs. For interleave-type inputs, you can directly place the image-path dictionary at the image token position.
+
+ ### 3. Implement the metrics for your dataset
+
+ Adding evaluation for a benchmark requires defining a class for that dataset to implement its metric calculation. Image-text multimodal datasets all inherit from the `ImageBaseDataset` object in `vlmeval/dataset/image_base.py`, where `TYPE` defines the dataset type, `DATASET_URL` is the dataset's download address, and `DATASET_MD5` is the MD5 checksum for verifying the dataset file.
+
+ In this class, you **need to implement** the `evaluate(eval_file, **judge_kwargs)` class function to compute metrics and output results for the custom dataset. The input `eval_file` is the path to the model prediction results `{model_name}_{dataset}.xlsx`. It can be read as a `pandas.DataFrame` via `load(eval_file)` and contains fields such as index, question, answer, category, prediction, etc. `judge_kwargs` passes an evaluation-related dictionary, such as the name of the judge model, the number of API request threads, etc. **The return value** is the computed accuracy and other metrics, formatted as a dictionary of lists and organized into a `pandas.DataFrame`.
+
+ ## Implement a new model
+
+ Example PR: **Support LLaVA-Next-Interleave** ([#294](https://github.com/open-compass/VLMEvalKit/pull/294))
+
+ **1. Support the `generate_inner` API (mandatory)**
+
+ All existing models are implemented in `vlmeval/vlm`. For a minimal model, your model class **must implement the method** `generate_inner(msgs, dataset=None)`. This function feeds multi-modal data to the VLM and returns the VLM's prediction (a string). The optional argument `dataset` can be used as a flag for switching among inference strategies.
+
+ The multi-modal message `msgs` is a list of dictionaries, each with two keys, type and value:
+ - `type`: We currently support two types; choices are ["image", "text"].
+ - `value`: When the type is `text`, the value is the text message (a single string); when the type is `image`, the value can be the local path of an image file or an image URL.
+
+ > Currently, a multi-modal message may contain arbitrarily interleaved images and texts. If your model does not support this, our recommended practice is to take the first image and the concatenated text messages as the model input. You can set `INTERLEAVE = False` in your model class and call `self.message_to_promptimg(message, dataset=dataset)` to get your prompt and the path of the first image.
+
+ Some examples of multi-modal messages:
+
+ ```python
+ IMAGE_PTH = 'assets/apple.jpg'
+ IMAGE_URL = 'https://raw.githubusercontent.com/open-compass/VLMEvalKit/main/assets/apple.jpg'
+ msg1 = [
+     dict(type='image', value=IMAGE_PTH),
+     dict(type='text', value='What is in this image?')
+ ]
+ msg2 = [
+     dict(type='image', value=IMAGE_URL),
+     dict(type='image', value=IMAGE_URL),
+     dict(type='text', value='How many apples are there in these images?')
+ ]
+ response = model.generate(msg1)
+ ```
+
+ For convenience, we also support accepting a list of strings as input. In that case, we check whether each string is an image path or an image URL and automatically convert it to the `list[dict]` format:
+
+ ```python
+ IMAGE_PTH = 'assets/apple.jpg'
+ IMAGE_URL = 'https://raw.githubusercontent.com/open-compass/VLMEvalKit/main/assets/apple.jpg'
+ msg1 = [IMAGE_PTH, 'What is in this image?']
+ msg2 = [IMAGE_URL, IMAGE_URL, 'How many apples are there in these images?']
+ response = model.generate(msg1)
+ ```
+
+ **2. Support custom prompt building (optional)**
+
+ Besides, your model can support custom prompt building by implementing two optional methods: `use_custom_prompt(dataset)` and `build_prompt(line, dataset=None)`.
+
+ - `use_custom_prompt(dataset)` returns a boolean indicating whether the model should use the custom prompt building strategy.
+ - If `use_custom_prompt(dataset)` returns True, `build_prompt(line, dataset)` should return a custom-built multimodal message for the corresponding dataset, where `line` is a dictionary containing the necessary information of a data sample. If `use_custom_prompt(dataset)` returns False, the default prompt building strategy is used.
+
+ **3. Support multi-turn chatting (optional)**
+
+ You can add multi-turn chatting to your model and make it compatible with multi-turn evaluation by supporting the `chat_inner(message, dataset)` API. The API outputs a single string response, and `message` is a list of chat history entries in the following format:
+
+ ```python
+ # Assume msg1, msg2, msg3, ... are multi-modal messages following the previously described format
+ # `chat_inner` takes the following chat history list as input:
+ message = [
+     dict(role='user', content=msg1),
+     dict(role='assistant', content=msg2),
+     dict(role='user', content=msg3),
+     dict(role='assistant', content=msg4),
+     ......
+     dict(role='user', content=msgn),
+ ]
+ # `message` should contain an odd number of chat utterances; the roles should alternate between "user" and "assistant", with the last utterance being "user".
+ # The chat function will call `chat_inner`
+ response = model.chat(message)
+ ```
+
+ ### Example PRs:
+
+ - VLM that does not support interleaved images and texts, and does not use custom prompts: [[Model] Support glm-4v-9b](https://github.com/open-compass/VLMEvalKit/pull/221)
+ - VLM that supports interleaved images and texts and custom prompts: [Add MiniCPM-Llama3-V-2.5](https://github.com/open-compass/VLMEvalKit/pull/205)
+ - VLM API: [Feature add glmv](https://github.com/open-compass/VLMEvalKit/pull/201)
+
+ ## Contribute to VLMEvalKit
+
+ If you want to contribute code to **VLMEvalKit**, please run the pre-commit check before submitting a PR. That helps keep the code tidy.
+
+ ```bash
+ # Under the directory of VLMEvalKit, install the pre-commit hook:
+ pip install pre-commit
+ pre-commit install
+ pre-commit run --all-files
+ # Then you can commit your code.
+ ```
VLMEvalKit-sudoku/docs/zh-CN/_static/image/logo_icon.svg ADDED
VLMEvalKit-sudoku/docs/zh-CN/docutils.conf ADDED
@@ -0,0 +1,2 @@
+ [html writers]
+ table_style: colwidths-auto
VLMEvalKit-sudoku/docs/zh-CN/index.rst ADDED
@@ -0,0 +1,49 @@
+ Welcome to the VLMEvalKit Chinese tutorial!
+ ==========================================
+
+ Getting started with VLMEvalKit
+ -------------------------------
+
+ To help users get up to speed quickly, we recommend the following workflow:
+
+ - If you want to use VLMEvalKit, we recommend first reading the `Get Started`_ section to set up the environment and run a mini experiment to familiarize yourself with the workflow.
+
+ - If you want to customize more modules, such as adding datasets and models, see the `Advanced Tutorials`_.
+
+ We always welcome users' PRs and issues to improve VLMEvalKit!
+
+ .. _Quickstart:
+ .. toctree::
+    :maxdepth: 1
+    :caption: Quickstart
+
+    Quickstart.md
+
+
+ .. .. _Tutorials:
+ .. .. toctree::
+ ..    :maxdepth: 1
+ ..    :caption: Tutorials
+
+ ..    user_guides/framework_overview.md
+
+ .. _Advanced Tutorials:
+ .. toctree::
+    :maxdepth: 1
+    :caption: Advanced Tutorials
+
+    Development.md
+    ConfigSystem.md
+
+ .. .. _Notes:
+ .. .. toctree::
+ ..    :maxdepth: 1
+ ..    :caption: Notes
+
+ ..    notes/contribution_guide.md
+
+ Indices and tables
+ ==================
+
+ * :ref:`genindex`
+ * :ref:`search`
VLMEvalKit-sudoku/llava/eval/eval_gpt_review.py ADDED
@@ -0,0 +1,113 @@
+ import argparse
+ import json
+ import os
+
+ import openai
+ import tqdm
+ import ray
+ import time
+
+ NUM_SECONDS_TO_SLEEP = 3
+
+
+ @ray.remote(num_cpus=4)
+ def get_eval(content: str, max_tokens: int):
+     while True:
+         try:
+             response = openai.ChatCompletion.create(
+                 model='gpt-4',
+                 messages=[{
+                     'role': 'system',
+                     'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
+                 }, {
+                     'role': 'user',
+                     'content': content,
+                 }],
+                 temperature=0.2,  # TODO: figure out which temperature is best for evaluation
+                 max_tokens=max_tokens,
+             )
+             break
+         except openai.error.RateLimitError:
+             pass
+         except Exception as e:
+             print(e)
+         time.sleep(NUM_SECONDS_TO_SLEEP)
+
+     print('success!')
+     return response['choices'][0]['message']['content']
+
+
+ def parse_score(review):
+     try:
+         score_pair = review.split('\n')[0]
+         score_pair = score_pair.replace(',', ' ')
+         sp = score_pair.split(' ')
+         if len(sp) == 2:
+             return [float(sp[0]), float(sp[1])]
+         else:
+             print('error', review)
+             return [-1, -1]
+     except Exception as e:
+         print(e)
+         print('error', review)
+         return [-1, -1]
+
+
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
+     parser.add_argument('-q', '--question')
+     # parser.add_argument('-a', '--answer')
+     parser.add_argument('-a', '--answer-list', nargs='+', default=[])
+     parser.add_argument('-r', '--rule')
+     parser.add_argument('-o', '--output')
+     parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
+     args = parser.parse_args()
+
+     ray.init()
+
+     f_q = open(os.path.expanduser(args.question))
+     f_ans1 = open(os.path.expanduser(args.answer_list[0]))
+     f_ans2 = open(os.path.expanduser(args.answer_list[1]))
+     rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
+
+     review_file = open(f'{args.output}', 'w')
+
+     js_list = []
+     handles = []
+     idx = 0
+     for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
+         # if idx == 1:
+         #     break
+
+         ques = json.loads(ques_js)
+         ans1 = json.loads(ans1_js)
+         ans2 = json.loads(ans2_js)
+
+         category = json.loads(ques_js)['category']
+         if category in rule_dict:
+             rule = rule_dict[category]
+         else:
+             rule = rule_dict['default']
+         prompt = rule['prompt']
+         role = rule['role']
+         content = (f'[Question]\n{ques["text"]}\n\n'
+                    f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
+                    f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
+                    f'[System]\n{prompt}\n\n')
+         js_list.append({
+             'id': idx + 1,
+             'question_id': ques['question_id'],
+             'answer1_id': ans1['answer_id'],
+             'answer2_id': ans2['answer_id'],
+             'category': category})
+         idx += 1
+         handles.append(get_eval.remote(content, args.max_tokens))
+         # To avoid the rate limit set by OpenAI
+         time.sleep(NUM_SECONDS_TO_SLEEP)
+
+     reviews = ray.get(handles)
+     for idx, review in enumerate(reviews):
+         scores = parse_score(review)
+         js_list[idx]['content'] = review
+         js_list[idx]['tuple'] = scores
+         review_file.write(json.dumps(js_list[idx]) + '\n')
+     review_file.close()
VLMEvalKit-sudoku/llava/eval/eval_science_qa_gpt4_requery.py ADDED
@@ -0,0 +1,149 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ import argparse
+ import json
+ import os
+ import re
+ import random
+ from collections import defaultdict
+
+
+ def get_args():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--base-dir', type=str)
+     parser.add_argument('--gpt4-result', type=str)
+     parser.add_argument('--requery-result', type=str)
+     parser.add_argument('--our-result', type=str)
+     parser.add_argument('--output-result', type=str)
+     parser.add_argument('--split', type=str, default='test')
+     # nargs='+' instead of type=list: argparse's type=list would split a CLI string into characters
+     parser.add_argument('--options', nargs='+', default=["A", "B", "C", "D", "E"])
+     return parser.parse_args()
+
+
+ def convert_caps(results):
+     fakecaps = []
+     for result in results:
+         image_id = result['question_id']
+         caption = result['text']
+         fakecaps.append({"image_id": int(image_id), "caption": caption})
+     return fakecaps
+
+
+ def get_pred_idx(prediction, choices, options):
+     """
+     Get the index (e.g. 2) from the prediction (e.g. 'C')
+     """
+     if prediction in options[:len(choices)]:
+         return options.index(prediction)
+     else:
+         return random.choice(range(len(choices)))
+
+
+ if __name__ == "__main__":
+     args = get_args()
+
+     base_dir = args.base_dir
+     split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split]
+     problems = json.load(open(os.path.join(base_dir, "problems.json")))
+     our_predictions = [json.loads(line) for line in open(args.our_result)]
+     our_predictions = {pred['question_id']: pred for pred in our_predictions}
+     split_problems = {idx: problems[idx] for idx in split_indices}
+
+     requery_predictions = [json.loads(line) for line in open(args.requery_result)]
+     requery_predictions = {pred['question_id']: pred for pred in requery_predictions}
+
+     gpt4_predictions = json.load(open(args.gpt4_result))['outputs']
+
+     results = defaultdict(lambda: 0)
+
+     sqa_results = {}
+     sqa_results['acc'] = None
+     sqa_results['correct'] = None
+     sqa_results['count'] = None
+     sqa_results['results'] = {}
+     sqa_results['outputs'] = {}
+
+     for prob_id, prob in split_problems.items():
+         assert prob_id in our_predictions, f"missing our-result prediction for {prob_id}"
+         assert prob_id in gpt4_predictions, f"missing GPT-4 prediction for {prob_id}"
+         our_pred = our_predictions[prob_id]['text']
+         gpt4_pred = gpt4_predictions[prob_id]
+         if prob_id not in requery_predictions:
+             results['missing_requery'] += 1
+             requery_pred = "MISSING"
+         else:
+             requery_pred = requery_predictions[prob_id]['text']
+
+         pattern = re.compile(r'The answer is ([A-Z]).')
+         our_res = pattern.findall(our_pred)
+         if len(our_res) == 1:
+             our_answer = our_res[0]  # 'A', 'B', ...
+         else:
+             our_answer = "FAILED"
+
+         requery_res = pattern.findall(requery_pred)
+         if len(requery_res) == 1:
+             requery_answer = requery_res[0]  # 'A', 'B', ...
+         else:
+             requery_answer = "FAILED"
+
+         gpt4_res = pattern.findall(gpt4_pred)
+         if len(gpt4_res) == 1:
+             gpt4_answer = gpt4_res[0]  # 'A', 'B', ...
+         else:
+             gpt4_answer = "FAILED"
+
+         our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options)
+         gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options)
+         requery_pred_idx = get_pred_idx(requery_answer, prob['choices'], args.options)
+
+         results['total'] += 1
+
+         if gpt4_answer == 'FAILED':
+             results['gpt4_failed'] += 1
+             if gpt4_pred_idx == prob['answer']:
+                 results['gpt4_correct'] += 1
+             if our_pred_idx == prob['answer']:
+                 results['gpt4_ourvisual_correct'] += 1
+         elif gpt4_pred_idx == prob['answer']:
+             results['gpt4_correct'] += 1
+             results['gpt4_ourvisual_correct'] += 1
+
+         if our_pred_idx == prob['answer']:
+             results['our_correct'] += 1
+
+         if requery_answer == 'FAILED':
+             sqa_results['results'][prob_id] = our_pred_idx
+             if our_pred_idx == prob['answer']:
+                 results['requery_correct'] += 1
+         else:
+             sqa_results['results'][prob_id] = requery_pred_idx
+             if requery_pred_idx == prob['answer']:
+                 results['requery_correct'] += 1
+             else:
+                 # the stray print(...) that used to sit inside this f-string is now a plain separator line
+                 print(f"""
+ Question ({args.options[prob['answer']]}): {our_predictions[prob_id]['prompt']}
+ Our ({our_answer}): {our_pred}
+ GPT-4 ({gpt4_answer}): {gpt4_pred}
+ Requery ({requery_answer}): {requery_pred}
+ =====================================
+ """)
+
+         if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']:
+             results['correct_upperbound'] += 1
+
+     total = results['total']
+     print(f'Total: {total}, Our-Correct: {results["our_correct"]}, Accuracy: {results["our_correct"] / total * 100:.2f}%')
+     print(f'Total: {total}, GPT-4-Correct: {results["gpt4_correct"]}, Accuracy: {results["gpt4_correct"] / total * 100:.2f}%')
+     print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%')
+     print(f'Total: {total}, GPT-4-OursVisual-Correct: {results["gpt4_ourvisual_correct"]}, Accuracy: {results["gpt4_ourvisual_correct"] / total * 100:.2f}%')
+     print(f'Total: {total}, Requery-Correct: {results["requery_correct"]}, Accuracy: {results["requery_correct"] / total * 100:.2f}%')
+     print(f'Total: {total}, Correct upper: {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%')
+
+     sqa_results['acc'] = results["requery_correct"] / total * 100
+     sqa_results['correct'] = results["requery_correct"]
+     sqa_results['count'] = total
+
+     with open(args.output_result, 'w') as f:
+         json.dump(sqa_results, f, indent=2)
+
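The `The answer is ([A-Z]).` pattern above is the whole contract between the three prediction files, and answers that fail it fall back to a random choice. A minimal, self-contained sketch of that extraction-plus-fallback logic (`extract_answer` and `pred_idx` are illustrative names, not from the repo):

```python
import random
import re

# Same pattern as the evaluation script above: exactly one capital letter
# following the literal phrase "The answer is".
ANSWER_RE = re.compile(r'The answer is ([A-Z]).')

def extract_answer(text: str) -> str:
    """Return the matched letter, or 'FAILED' if absent or ambiguous."""
    matches = ANSWER_RE.findall(text)
    return matches[0] if len(matches) == 1 else "FAILED"

def pred_idx(answer: str, choices: list, options: list) -> int:
    """Map a letter to a choice index; guess uniformly at random on failure."""
    if answer in options[:len(choices)]:
        return options.index(answer)
    return random.choice(range(len(choices)))

print(extract_answer("I think. The answer is C."))  # -> C
print(extract_answer("No idea."))                   # -> FAILED
```

Note that the random fallback means reported accuracies for FAILED answers are noisy; the script tracks `gpt4_failed` separately for exactly that reason.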
VLMEvalKit-sudoku/llava/eval/model_vqa_science.py ADDED
@@ -0,0 +1,151 @@
+ import argparse
+ import torch
+ import os
+ import json
+ from tqdm import tqdm
+ import shortuuid
+
+ from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
+ from llava.conversation import conv_templates, SeparatorStyle
+ from llava.model.builder import load_pretrained_model
+ from llava.utils import disable_torch_init
+ from llava.mm_utils import tokenizer_image_token, process_images, get_model_name_from_path
+
+ from PIL import Image
+ import math
+ from llava.slice_process import slice_image_minicpm, split_image, resize_image_keep_ratio
+
+
+ def split_list(lst, n):
+     """Split a list into n (roughly) equal-sized chunks"""
+     chunk_size = math.ceil(len(lst) / n)  # ceiling division
+     return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)]
+
+
+ def get_chunk(lst, n, k):
+     chunks = split_list(lst, n)
+     return chunks[k]
+
+
+ def eval_model(args):
+     # Model
+     disable_torch_init()
+     model_path = os.path.expanduser(args.model_path)
+     model_name = get_model_name_from_path(model_path)
+     tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name, _args=args)
+
+     questions = json.load(open(os.path.expanduser(args.question_file), "r"))
+     questions = get_chunk(questions, args.num_chunks, args.chunk_idx)
+     answers_file = os.path.expanduser(args.answers_file)
+     os.makedirs(os.path.dirname(answers_file), exist_ok=True)
+     ans_file = open(answers_file, "w")
+     for i, line in enumerate(tqdm(questions)):
+         idx = line["id"]
+         question = line['conversations'][0]
+         qs = question['value'].replace('<image>', '').strip()
+         cur_prompt = qs
+
+         if 'image' in line:
+             image_file = line["image"]
+             image = Image.open(os.path.join(args.image_folder, image_file))
+
+             # image_tensor = process_images([image], image_processor, model.config)[0]
+             # images = image_tensor.unsqueeze(0).half().cuda()
+             # image_sizes = [image.size]
+
+             # adapt
+             # image, _, _, _ = slice_image_minicpm(
+             #     image, max_slice_nums=7, scale_resolution=336, patch_size=14, never_split=False)
+             # image_sizes = [image.size]
+             # image = image_processor.preprocess(image, do_resize=False, do_center_crop=False,
+             #                                    do_rescale=True, do_normalize=True, return_tensors='pt')['pixel_values'][0]
+             # images = [image.half().cuda()]
+
+             image = resize_image_keep_ratio(image, max_size=1024)
+             # minicpm-v style slicing
+             source_image, patches, best_grid, ind_tokens = slice_image_minicpm(
+                 image, max_slice_nums=7, scale_resolution=336, patch_size=14, never_split=False)
+             image_sizes = [source_image.size]
+             processor = image_processor
+             if best_grid is None:  # the image was not sliced
+                 source_tensors = processor.preprocess(source_image, do_resize=False, do_center_crop=False,
+                                                       do_rescale=True, do_normalize=True,
+                                                       return_tensors='pt')['pixel_values']  # 1, 3, abs_h, abs_w
+                 crop_size = processor.crop_size
+                 patch_tensors = torch.zeros(1, 3, crop_size['height'], crop_size['width'])
+             else:
+                 source_tensors = processor.preprocess(source_image, do_resize=False, do_center_crop=False,
+                                                       do_rescale=True, do_normalize=True,
+                                                       return_tensors='pt')['pixel_values']  # 1, 3, abs_h, abs_w
+                 patch_tensors = processor.preprocess(patches, do_resize=False, do_center_crop=False,
+                                                      do_rescale=True, do_normalize=True,
+                                                      return_tensors='pt')['pixel_values']  # num_slice, 3, s_h, s_w
+             images = [source_tensors[0].half().cuda()]  # 3, h, w
+             patch_images = [patch_tensors.half().cuda()]  # bs, 3, h, w
+             ind_tokens = [ind_tokens]
+             if getattr(model.config, 'mm_use_im_start_end', False):
+                 qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs
+             else:
+                 qs = DEFAULT_IMAGE_TOKEN + '\n' + qs
+             cur_prompt = '<image>' + '\n' + cur_prompt
+         else:
+             images = None
+             image_sizes = None
+             patch_images = None
+             ind_tokens = None
+
+         if args.single_pred_prompt:
+             qs = qs + '\n' + "Answer with the option's letter from the given choices directly."
+             cur_prompt = cur_prompt + '\n' + "Answer with the option's letter from the given choices directly."
+
+         conv = conv_templates[args.conv_mode].copy()
+         conv.append_message(conv.roles[0], qs)
+         conv.append_message(conv.roles[1], None)
+         prompt = conv.get_prompt()
+
+         input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
+
+         with torch.inference_mode():
+             output_ids = model.generate(
+                 input_ids,
+                 images=images,
+                 image_sizes=image_sizes,
+                 patch_images=patch_images,
+                 ind_tokens=ind_tokens,
+                 do_sample=True if args.temperature > 0 else False,
+                 temperature=args.temperature,
+                 num_beams=args.num_beams,
+                 max_new_tokens=1024,
+                 use_cache=True,
+             )
+
+         outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
+
+         ans_id = shortuuid.uuid()
+         ans_file.write(json.dumps({"question_id": idx,
+                                    "prompt": cur_prompt,
+                                    "text": outputs,
+                                    "answer_id": ans_id,
+                                    "model_id": model_name,
+                                    "metadata": {}}) + "\n")
+         ans_file.flush()
+     ans_file.close()
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--model-path", type=str, default="facebook/opt-350m")
+     parser.add_argument("--model-base", type=str, default=None)
+     parser.add_argument("--image-folder", type=str, default="")
+     parser.add_argument("--question-file", type=str, default="tables/question.json")
+     parser.add_argument("--answers-file", type=str, default="answer.jsonl")
+     parser.add_argument("--conv-mode", type=str, default="llava_v0")
+     parser.add_argument("--num-chunks", type=int, default=1)
+     parser.add_argument("--chunk-idx", type=int, default=0)
+     parser.add_argument("--temperature", type=float, default=0.2)
+     parser.add_argument("--num_beams", type=int, default=1)
+     parser.add_argument("--answer-prompter", action="store_true")
+     parser.add_argument("--single-pred-prompt", action="store_true")
+     # note: argparse's type=bool treats any non-empty string (including "False") as True
+     parser.add_argument("--fted_encoder", type=bool, default=True)
+     args = parser.parse_args()
+
+     eval_model(args)
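The `--num-chunks`/`--chunk-idx` flags shard the question list across worker processes via `split_list`/`get_chunk`. A standalone sketch of that sharding (same two helpers, no model code):

```python
import math

def split_list(lst, n):
    """Split lst into contiguous chunks of ceil(len/n) items each."""
    chunk_size = math.ceil(len(lst) / n)  # ceiling division
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]

def get_chunk(lst, n, k):
    """Return the k-th of n chunks -- the shard one worker evaluates."""
    return split_list(lst, n)[k]

questions = list(range(10))
shards = [get_chunk(questions, 3, k) for k in range(3)]
print(shards)  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

One caveat worth knowing: because the chunk size is rounded up, `split_list` can produce *fewer* than `n` chunks (e.g. 5 items over 4 workers yields 3 chunks), so `get_chunk(lst, n, n-1)` may raise `IndexError` for some list lengths.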
VLMEvalKit-sudoku/llava/model/builder_new.bk ADDED
@@ -0,0 +1,306 @@
+ # Copyright 2023 Haotian Liu
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ import os
+ import warnings
+ import shutil
+
+ from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig
+ import torch
+ from llava.model import *
+ from llava.constants import DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN
+ from llava.utils import rank0_print
+
+
+ def load_pretrained_model(model_path, model_base, model_name, load_8bit=False, load_4bit=False, device_map="auto", torch_dtype="bfloat16", attn_implementation="flash_attention_2", customized_config=None, overwrite_config=None, **kwargs):
+     kwargs["device_map"] = device_map
+
+     if load_8bit:
+         kwargs["load_in_8bit"] = True
+     elif load_4bit:
+         kwargs["load_in_4bit"] = True
+         kwargs["quantization_config"] = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4")
+     elif torch_dtype == "float16":
+         kwargs["torch_dtype"] = torch.float16
+     elif torch_dtype == "bfloat16":
+         kwargs["torch_dtype"] = torch.bfloat16
+     else:
+         raise ValueError(f"Unsupported torch_dtype: {torch_dtype}")
+
+     if customized_config is not None:
+         kwargs["config"] = customized_config
+
+     # Pop the flag so it is not forwarded to from_pretrained; defaults to False when absent.
+     is_multimodal = kwargs.pop("multimodal", False) is True
+
+     if "llava" in model_name.lower() or is_multimodal:
+         # Load LLaVA model
+         if "lora" in model_name.lower() and model_base is None:
+             warnings.warn(
+                 "There is `lora` in model name but no `model_base` is provided. If you are loading a LoRA model, please provide the `model_base` argument. Detailed instruction: https://github.com/haotian-liu/LLaVA#launch-a-model-worker-lora-weights-unmerged."
+             )
+         if "lora" in model_name.lower() and model_base is not None:
+             lora_cfg_pretrained = AutoConfig.from_pretrained(model_path)
+             tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+             rank0_print("Loading LLaVA from base model...")
+             if "mixtral" in model_name.lower():
+                 from llava.model.language_model.llava_mixtral import LlavaMixtralConfig
+
+                 lora_cfg_pretrained = LlavaMixtralConfig.from_pretrained(model_path)
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 model = LlavaMixtralForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             elif "mistral" in model_name.lower():
+                 from llava.model.language_model.llava_mistral import LlavaMistralConfig
+
+                 lora_cfg_pretrained = LlavaMistralConfig.from_pretrained(model_path)
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 model = LlavaMistralForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             elif "gemma" in model_name.lower():
+                 from llava.model.language_model.llava_gemma import LlavaGemmaConfig
+
+                 lora_cfg_pretrained = LlavaGemmaConfig.from_pretrained(model_path)
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 model = LlavaGemmaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             else:
+                 from llava.model.language_model.llava_llama import LlavaConfig
+
+                 lora_cfg_pretrained = LlavaConfig.from_pretrained(model_path)
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+
+             token_num, token_dim = model.lm_head.out_features, model.lm_head.in_features
+             if model.lm_head.weight.shape[0] != token_num:
+                 model.lm_head.weight = torch.nn.Parameter(torch.empty(token_num, token_dim, device=model.device, dtype=model.dtype))
+                 model.model.embed_tokens.weight = torch.nn.Parameter(torch.empty(token_num, token_dim, device=model.device, dtype=model.dtype))
+
+             rank0_print("Loading additional LLaVA weights...")
+             if os.path.exists(os.path.join(model_path, "non_lora_trainables.bin")):
+                 non_lora_trainables = torch.load(os.path.join(model_path, "non_lora_trainables.bin"), map_location="cpu")
+             else:
+                 # this is probably from HF Hub
+                 from huggingface_hub import hf_hub_download
+
+                 def load_from_hf(repo_id, filename, subfolder=None):
+                     cache_file = hf_hub_download(repo_id=repo_id, filename=filename, subfolder=subfolder)
+                     return torch.load(cache_file, map_location="cpu")
+
+                 non_lora_trainables = load_from_hf(model_path, "non_lora_trainables.bin")
+             non_lora_trainables = {(k[11:] if k.startswith("base_model.") else k): v for k, v in non_lora_trainables.items()}
+             if any(k.startswith("model.model.") for k in non_lora_trainables):
+                 non_lora_trainables = {(k[6:] if k.startswith("model.") else k): v for k, v in non_lora_trainables.items()}
+             model.load_state_dict(non_lora_trainables, strict=False)
+
+             from peft import PeftModel
+
+             rank0_print("Loading LoRA weights...")
+             model = PeftModel.from_pretrained(model, model_path)
+             rank0_print("Merging LoRA weights...")
+             model = model.merge_and_unload()
+             rank0_print("Model is loaded...")
+         elif model_base is not None:  # this may be mm projector only, loading projector with preset language model
+             rank0_print(f"Loading LLaVA from base model {model_base}...")
+             if "mixtral" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 cfg_pretrained = AutoConfig.from_pretrained(model_path)
+                 model = LlavaMixtralForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             elif "mistral" in model_name.lower() or "zephyr" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 cfg_pretrained = AutoConfig.from_pretrained(model_path)
+                 model = LlavaMistralForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             elif "gemma" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 cfg_pretrained = AutoConfig.from_pretrained(model_path)
+                 model = LlavaGemmaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             elif (
+                 ("wizardlm-2" in model_name.lower() and "vicuna" in model_name.lower())
+                 or "llama" in model_name.lower()
+                 or "yi" in model_name.lower()
+                 or "nous-hermes" in model_name.lower()
+                 or "llava-v1.6-34b" in model_name.lower()
+                 or "llava" in model_name.lower()
+             ):
+                 from llava.model.language_model.llava_llama import LlavaConfig
+
+                 tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+                 if customized_config is None:
+                     llava_cfg = LlavaConfig.from_pretrained(model_path)
+                     if "v1.5" in model_name.lower():
+                         llava_cfg.delay_load = True  # a workaround for correctly loading v1.5 models
+                 else:
+                     llava_cfg = customized_config
+
+                 model = LlavaLlamaForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=llava_cfg, **kwargs)
+             else:
+                 raise ValueError(f"Model {model_name} not supported")
+
+             mm_projector_weights = torch.load(os.path.join(model_path, "mm_projector.bin"), map_location="cpu")
+             mm_projector_weights = {k: v.to(torch.float16) for k, v in mm_projector_weights.items()}
+             model.load_state_dict(mm_projector_weights, strict=False)
+         else:
+             rank0_print(f"Loaded LLaVA model: {model_path}")
+             if "mixtral" in model_name.lower():
+                 from llava.model.language_model.llava_mixtral import LlavaMixtralConfig
+
+                 tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
+                 if customized_config is None:
+                     llava_cfg = LlavaMixtralConfig.from_pretrained(model_path)
+                 else:
+                     llava_cfg = customized_config
+
+                 if overwrite_config is not None:
+                     rank0_print(f"Overwriting config with {overwrite_config}")
+                     for k, v in overwrite_config.items():
+                         setattr(llava_cfg, k, v)
+
+                 tokenizer = AutoTokenizer.from_pretrained(model_path)
+                 model = LlavaMixtralForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
+
+             elif "mistral" in model_name.lower() or "zephyr" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_path)
+                 model = LlavaMistralForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, **kwargs)
+             elif (
+                 ("wizardlm-2" in model_name.lower() and "vicuna" in model_name.lower())
+                 or "llama" in model_name.lower()
+                 or "yi" in model_name.lower()
+                 or "nous-hermes" in model_name.lower()
+                 or "llava-v1.6-34b" in model_name.lower()
+                 or "llava-v1.5" in model_name.lower()
+             ):
+                 from llava.model.language_model.llava_llama import LlavaConfig
+
+                 tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
+                 if customized_config is None:
+                     llava_cfg = LlavaConfig.from_pretrained(model_path)
+                     if "v1.5" in model_name.lower():
+                         llava_cfg.delay_load = True  # a workaround for correctly loading v1.5 models
+                 else:
+                     llava_cfg = customized_config
+
+                 if overwrite_config is not None:
+                     rank0_print(f"Overwriting config with {overwrite_config}")
+                     for k, v in overwrite_config.items():
+                         setattr(llava_cfg, k, v)
+
+                 model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
+
+             elif "qwen" in model_name.lower() or "quyen" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_path)
+                 if "moe" in model_name.lower() or "A14B" in model_name.lower():
+                     from llava.model.language_model.llava_qwen_moe import LlavaQwenMoeConfig
+                     if overwrite_config is not None:
+                         llava_cfg = LlavaQwenMoeConfig.from_pretrained(model_path)
+                         rank0_print(f"Overwriting config with {overwrite_config}")
+                         for k, v in overwrite_config.items():
+                             setattr(llava_cfg, k, v)
+                         model = LlavaQwenMoeForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
+                     else:
+                         model = LlavaQwenMoeForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, **kwargs)
+
+                 else:
+                     from llava.model.language_model.llava_qwen import LlavaQwenConfig
+                     if overwrite_config is not None:
+                         llava_cfg = LlavaQwenConfig.from_pretrained(model_path)
+                         rank0_print(f"Overwriting config with {overwrite_config}")
+                         for k, v in overwrite_config.items():
+                             setattr(llava_cfg, k, v)
+                         model = LlavaQwenForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
+                     else:
+                         model = LlavaQwenForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, **kwargs)
+                     model.to(torch.bfloat16)
+             elif "gemma" in model_name.lower():
+                 tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
+                 cfg_pretrained = AutoConfig.from_pretrained(model_path)
+                 model = LlavaGemmaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, config=cfg_pretrained, attn_implementation=attn_implementation, **kwargs)
+             else:
+                 try:
+                     from llava.model.language_model.llava_llama import LlavaConfig
+
+                     tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
+                     if customized_config is None:
+                         llava_cfg = LlavaConfig.from_pretrained(model_path)
+                         if "v1.5" in model_path.lower():
+                             llava_cfg.delay_load = True  # a workaround for correctly loading v1.5 models
+                     else:
+                         llava_cfg = customized_config
+
+                     if overwrite_config is not None:
+                         rank0_print(f"Overwriting config with {overwrite_config}")
+                         for k, v in overwrite_config.items():
+                             setattr(llava_cfg, k, v)
+                     model = LlavaLlamaForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, config=llava_cfg, **kwargs)
+                     model.to(torch.bfloat16)
+                 except Exception:
+                     raise ValueError(f"Model {model_name} not supported")
+
+     else:
+         # Load language model
+         if model_base is not None:
+             # PEFT model
+             from peft import PeftModel
+
+             tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
+             model = AutoModelForCausalLM.from_pretrained(model_base, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
+             print(f"Loading LoRA weights from {model_path}")
+             model = PeftModel.from_pretrained(model, model_path)
+             print(f"Merging weights")
+             model = model.merge_and_unload()
+             print("Convert to FP16...")
+             model.to(torch.float16)
+         else:
+             use_fast = False
+             if "mpt" in model_name.lower().replace("prompt", ""):
+                 tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
+                 model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, trust_remote_code=True, **kwargs)
+             else:
+                 tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
+                 model = AutoModelForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, **kwargs)
+
+     rank0_print(f"Model Class: {model.__class__.__name__}")
+     image_processor = None
+
+     if "llava" in model_name.lower() or is_multimodal:
+         mm_use_im_start_end = getattr(model.config, "mm_use_im_start_end", False)
+         mm_use_im_patch_token = getattr(model.config, "mm_use_im_patch_token", True)
+         if mm_use_im_patch_token:
+             tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True)
+         if mm_use_im_start_end:
+             tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True)
+         model.resize_token_embeddings(len(tokenizer))
+
+         vision_tower = model.get_vision_tower()
+         if not vision_tower.is_loaded:
+             vision_tower.load_model(device_map=device_map, model_path=model_path)
+         if device_map != "auto":
+             vision_tower.to(device="cuda", dtype=torch.float16)
+         image_processor = vision_tower.image_processor
+
+     if hasattr(model.config, "max_sequence_length"):
+         context_len = model.config.max_sequence_length
+     elif hasattr(model.config, "max_position_embeddings"):
+         context_len = model.config.max_position_embeddings
+     elif hasattr(model.config, "tokenizer_model_max_length"):
+         context_len = model.config.tokenizer_model_max_length
+     else:
+         context_len = 2048
+
+     return tokenizer, model, image_processor, context_len
VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/adapt_clip_vision_model.cpython-310.pyc ADDED
Binary file (7.8 kB).
 
VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/hubconf.cpython-310.pyc ADDED
Binary file (8.19 kB).
 
VLMEvalKit-sudoku/llava/model/multimodal_encoder/__pycache__/imagebind.cpython-310.pyc ADDED
Binary file (2.84 kB).
 
VLMEvalKit-sudoku/llava/model/multimodal_encoder/builder.py ADDED
@@ -0,0 +1,49 @@
+ import os
+ from .imagebind import ImageBindWrapper
+ from .open_clip_encoder import OpenCLIPVisionTower
+ from .hf_vision import HFVisionTower
+ from .siglip_encoder import SigLipVisionTower
+ from .modeling_siglip2 import SigLip2VisionTower
+ from .modeling_swin_siglip2 import NaFlexSigLip2SwinVisionTower
+ from .modeling_swin_siglip2_zyc import SigLip2SwinVisionTower
+ from .clip_encoder import CLIPVisionTower, CLIPVisionTowerS2
+ from .modeling_moonvit import MoonViTVisionTower
+ from .modeling_qwen2_5vl import Qwen2_5VLVisionTower
+
+ # from .eva_clip.eva_clip_encoder import EvaClipVisionTower
+ # from .dev_eva_clip.eva_vit import EvaViTWrapper
+
+
+ def build_vision_tower(vision_tower_cfg, **kwargs):
+     vision_tower = getattr(vision_tower_cfg, "mm_vision_tower", getattr(vision_tower_cfg, "vision_tower", None))
+     is_absolute_path_exists = os.path.exists(vision_tower)
+     use_s2 = getattr(vision_tower_cfg, "s2", False)
+
+     if "siglip2" in vision_tower and "swin" in vision_tower:
+         return SigLip2SwinVisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+         # return NaFlexSigLip2SwinVisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+     elif "siglip2" in vision_tower:
+         return SigLip2VisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+     elif "moonvit" in vision_tower:
+         return MoonViTVisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+     elif "qwen2_5vl" in vision_tower:
+         return Qwen2_5VLVisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+     elif "siglip" in vision_tower:
+         return SigLipVisionTower(vision_tower, vision_tower_cfg=vision_tower_cfg, **kwargs)
+     elif is_absolute_path_exists or vision_tower.startswith("openai") or vision_tower.startswith("laion") or "ShareGPT4V" in vision_tower:
+         if use_s2:
+             return CLIPVisionTowerS2(vision_tower, args=vision_tower_cfg, **kwargs)
+         else:
+             return CLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
+     elif vision_tower.startswith("hf:"):
+         return HFVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
+     elif vision_tower in ["imagebind_huge"]:
+         return ImageBindWrapper(vision_tower, args=vision_tower_cfg, **kwargs)
+     elif vision_tower.startswith("open_clip_hub"):
+         return OpenCLIPVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
+     # elif "internal-eva" in vision_tower.lower() or "eva02" in vision_tower.lower():
+     #     return EvaClipVisionTower(vision_tower, args=vision_tower_cfg, **kwargs)
+     # elif vision_tower in ["EVA-CLIP-8B", "EVA-CLIP-8B-plus"]:
+     #     return EvaViTWrapper(vision_tower, args=vision_tower_cfg, **kwargs)
+
+     raise ValueError(f"Unknown vision tower: {vision_tower}")
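`build_vision_tower` dispatches purely on substring matches against the checkpoint name, so the order of the branches is load-bearing: "siglip2"+"swin" must be tested before "siglip2", which must come before "siglip". A toy, dependency-free sketch of that ordered dispatch (`pick_tower` and the returned strings are illustrative, standing in for the real tower classes):

```python
def pick_tower(name: str) -> str:
    """Toy version of the ordered substring dispatch in build_vision_tower.
    More specific patterns must be tested before their prefixes."""
    rules = [
        (lambda n: "siglip2" in n and "swin" in n, "SigLip2SwinVisionTower"),
        (lambda n: "siglip2" in n, "SigLip2VisionTower"),
        (lambda n: "moonvit" in n, "MoonViTVisionTower"),
        (lambda n: "qwen2_5vl" in n, "Qwen2_5VLVisionTower"),
        (lambda n: "siglip" in n, "SigLipVisionTower"),
    ]
    for matches, tower in rules:
        if matches(name):
            return tower
    # the real builder falls through to CLIP for local paths / openai / laion names
    return "CLIPVisionTower"

print(pick_tower("siglip2-swin-base"))   # -> SigLip2SwinVisionTower
print(pick_tower("siglip-so400m"))       # -> SigLipVisionTower
```

If the `"siglip"` rule were listed first, every SigLIP2 checkpoint would silently get the wrong tower, which is why new, more specific encoders are inserted above the generic branches in the real file.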
VLMEvalKit-sudoku/llava/model/multimodal_encoder/dev_eva_clip/eva_clip/constants.py ADDED
@@ -0,0 +1,2 @@
+ OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
+ OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
VLMEvalKit-sudoku/llava/model/multimodal_encoder/eva_clip/model_configs/EVA-CLIP-8B.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "embed_dim": 1280,
3
+ "vision_cfg": {
4
+ "image_size": 224,
5
+ "layers": 32,
6
+ "width": 4096,
7
+ "head_width": 128,
8
+ "mlp_ratio": 5,
9
+ "patch_size": 14,
10
+ "eva_model_name": "eva-clip-8b-14-x",
11
+ "drop_path_rate": 0,
12
+ "qkv_bias": false,
13
+ "xattn": true,
14
+ "postnorm": false,
15
+ "fusedLN": false,
16
+ "use_rms_norm": true
17
+ },
18
+ "text_cfg": {
19
+ "context_length": 77,
20
+ "vocab_size": 49408,
21
+ "width": 1280,
22
+ "heads": 20,
23
+ "layers": 32,
24
+ "xattn": false,
25
+ "fusedLN": false
26
+ }
27
+ }
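Some vision parameters in this config are implied rather than stored: the number of attention heads follows from `width // head_width`, and the patch-grid size from `image_size // patch_size`. A quick sketch with the EVA-CLIP-8B values above:

```python
# Derived parameters for the EVA-CLIP-8B vision config above.
width, head_width = 4096, 128
num_heads = width // head_width
print(num_heads)  # 32

image_size, patch_size = 224, 14
num_patches = (image_size // patch_size) ** 2
print(num_patches)  # 256
```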
VLMEvalKit-sudoku/llava/model/multimodal_encoder/eva_clip/model_configs/EVA01-CLIP-g-14.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "embed_dim": 1024,
3
+ "vision_cfg": {
4
+ "image_size": 224,
5
+ "layers": 40,
6
+ "width": 1408,
7
+ "head_width": 88,
8
+ "mlp_ratio": 4.3637,
9
+ "patch_size": 14,
10
+ "eva_model_name": "eva-clip-g-14-x",
11
+ "drop_path_rate": 0.4,
12
+ "xattn": true,
13
+ "fusedLN": true
14
+ },
15
+ "text_cfg": {
16
+ "context_length": 77,
17
+ "vocab_size": 49408,
18
+ "width": 768,
19
+ "heads": 12,
20
+ "layers": 12,
21
+ "xattn": false,
22
+ "fusedLN": true
23
+ }
24
+ }
VLMEvalKit-sudoku/llava/model/multimodal_encoder/modeling_moonvit.py ADDED
@@ -0,0 +1,871 @@
1
+ import math
2
+ from copy import deepcopy
3
+ from typing import Union, Tuple, Sequence, Optional, List
4
+
5
+ import torch
6
+ import torch.nn as nn
7
+ import torch.nn.functional as F
8
+ from transformers.activations import PytorchGELUTanh
9
+ from transformers.modeling_utils import PreTrainedModel
10
+ from transformers.configuration_utils import PretrainedConfig
11
+ from transformers.utils import is_flash_attn_2_available
12
+ from llava.utils import rank0_print
13
+
14
+ if is_flash_attn_2_available():
15
+ from flash_attn import flash_attn_varlen_func
16
+ else:
17
+ flash_attn_varlen_func = None
18
+
19
+ # Image processor class for KimiVL.
20
+
21
+ import math
22
+ import numpy as np
23
+ from PIL import Image
24
+ from typing import Optional, Union
25
+
26
+ import torch
27
+ from torchvision.transforms import functional as TF
28
+ from transformers.image_utils import ImageInput, make_list_of_images, valid_images
29
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
30
+ from transformers.utils import TensorType
31
+
32
+ from transformers.image_utils import (
33
+ ChannelDimension,
34
+ PILImageResampling,
35
+ to_numpy_array,
36
+ )
37
+ from typing import Any, Optional, Tuple, Union, Dict
38
+ from transformers.image_processing_utils import BatchFeature, get_size_dict
39
+ from transformers.image_transforms import (
40
+ convert_to_rgb,
41
+ normalize,
42
+ rescale,
43
+ resize,
44
+ to_channel_dimension_format,
45
+ )
46
+ from functools import partial, reduce
47
+ from einops import rearrange
48
+
49
+ class MoonViTImageProcessor:
50
+ def __init__(self, image_mean=(0.5, 0.5, 0.5), image_std=(0.5, 0.5, 0.5), size=(392, 392), crop_size: Dict[str, int] = None, resample=PILImageResampling.BICUBIC, rescale_factor=1 / 255, data_format=ChannelDimension.FIRST):
51
+ crop_size = crop_size if crop_size is not None else {"height": 392, "width": 392}
52
+ crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size")
53
+
54
+ self.image_mean = image_mean
55
+ self.image_std = image_std
56
+ self.size = size
57
+ self.resample = resample
58
+ self.rescale_factor = rescale_factor
59
+ self.data_format = data_format
60
+ self.crop_size = crop_size
61
+
62
+ def preprocess(self, images, do_resize = True, do_center_crop = True, do_rescale = True, do_normalize = True, return_tensors = 'pt'):
63
+ if isinstance(images, Image.Image):
64
+ images = [images]
65
+ else:
66
+ # to adapt video data
67
+ images = [to_numpy_array(image) for image in images]
68
+ assert isinstance(images, list)
69
+
70
+ # NOTE: do_center_crop is accepted for API compatibility, but no crop transform is applied below.
71
+
72
+ transforms = [
73
+ convert_to_rgb,
74
+ to_numpy_array
75
+ ]
76
+
77
+ if do_resize:
78
+ transforms.append(partial(resize, size=self.size, resample=self.resample, data_format=self.data_format))
79
+ if do_rescale:
80
+ transforms.append(partial(rescale, scale=self.rescale_factor, data_format=self.data_format))
81
+ if do_normalize:
82
+ transforms.append(partial(normalize, mean=self.image_mean, std=self.image_std, data_format=self.data_format))
83
+
84
+ transforms.append(partial(to_channel_dimension_format, channel_dim=self.data_format, input_channel_dim=self.data_format))
85
+
86
+ images = reduce(lambda x, f: [*map(f, x)], transforms, images)
87
+ data = {"pixel_values": images}
88
+ return BatchFeature(data=data, tensor_type=return_tensors)
89
+
90
+
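The `preprocess` method composes its transform pipeline with `reduce(lambda x, f: [*map(f, x)], transforms, images)`: each function is mapped over the whole list, and the result feeds the next function. A toy sketch of the same composition pattern, using trivial stand-in transforms (not the real image ops):

```python
from functools import partial, reduce

# Toy stand-ins for the image transforms composed in preprocess().
transforms = [
    partial(lambda scale, v: v * scale, 0.5),  # like rescale
    lambda v: v - 1.0,                         # like normalize (shift only)
]

# Apply each transform to every element, feeding the result onward.
images = [2.0, 4.0]
out = reduce(lambda xs, f: [*map(f, xs)], transforms, images)
print(out)  # [0.0, 1.0]
```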
91
+ class MoonViTConfig(PretrainedConfig):
92
+ model_type = "moonvit"
93
+
94
+ def __init__(
95
+ self,
96
+ patch_size: int = 14,
97
+ init_pos_emb_height: int = 64,
98
+ init_pos_emb_width: int = 64,
99
+ num_attention_heads: int = 16,
100
+ num_hidden_layers: int = 27,
101
+ hidden_size: int = 1152,
102
+ intermediate_size: int = 4304,
103
+ **kwargs,
104
+ ):
105
+ super().__init__(**kwargs)
106
+ self.patch_size = patch_size
107
+ # Positional embedding config
108
+ self.init_pos_emb_height = init_pos_emb_height
109
+ self.init_pos_emb_width = init_pos_emb_width
110
+ # Transformer config
111
+ self.num_hidden_layers = num_hidden_layers
112
+ self.num_attention_heads = num_attention_heads
113
+ self.hidden_size = hidden_size
114
+ self.intermediate_size = intermediate_size
115
+
116
+ def multihead_attention(
117
+ q: torch.Tensor,
118
+ k: torch.Tensor,
119
+ v: torch.Tensor,
120
+ q_cu_seqlens: Optional[torch.Tensor] = None,
121
+ k_cu_seqlens: Optional[torch.Tensor] = None,
122
+ ):
123
+ """Multi-head attention using flash attention 2.
124
+ Args:
125
+ q, k, v: tensor of shape (batch_size, seqlen, num_heads, head_dim),
126
+ or (tot_seqlens, num_heads, head_dim) if packing.
127
+ q_cu_seqlens (torch.Tensor): cumulative sequence lengths of q.
128
+ The first element should be 0 and the last element should be q.shape[0].
129
+ k_cu_seqlens (torch.Tensor): cumulative sequence lengths of k.
130
+ The first element should be 0 and the last element should be k.shape[0].
131
+ Returns:
132
+ output: shape (batch_size, seqlen, dim) or (tot_seqlens, dim) if packing,
133
+ where dim = num_heads * head_dim
134
+ """
135
+ # Unified format legal check
136
+ assert q.dim() == k.dim() == v.dim() == 3, "q, k, v must have 3 dims"
137
+ assert q_cu_seqlens[-1] == q.shape[0], "q_cu_seqlens must sum to q.shape[0]"
138
+ assert (
139
+ k_cu_seqlens[-1] == k.shape[0] == v.shape[0]
140
+ ), "k_cu_seqlens must sum to k.shape[0]"
141
+ assert q.dtype in [
142
+ torch.bfloat16,
143
+ torch.float16,
144
+ ], f"unsupported dtype {q.dtype} for multihead attn"
145
+
146
+ max_seqlen_q = (q_cu_seqlens[1:] - q_cu_seqlens[:-1]).max().item()
147
+ max_seqlen_k = (k_cu_seqlens[1:] - k_cu_seqlens[:-1]).max().item()
148
+ attn_out = flash_attn_varlen_func(
149
+ q,
150
+ k,
151
+ v,
152
+ q_cu_seqlens,
153
+ k_cu_seqlens,
154
+ max_seqlen_q,
155
+ max_seqlen_k,
156
+ causal=False,
157
+ )
158
+ attn_out = attn_out.flatten(start_dim=-2)
159
+
160
+ return attn_out
161
+
162
+
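`multihead_attention` works on packed sequences: variable-length sequences are concatenated along one axis, and `cu_seqlens` (cumulative sequence lengths) records where each one starts and ends. A pure-Python sketch of the bookkeeping, with plain integers standing in for the `(tot_seqlens, num_heads, head_dim)` tensor:

```python
from itertools import accumulate

# Three sequences of lengths 3, 2 and 4, packed back to back.
lengths = [3, 2, 4]
cu_seqlens = [0, *accumulate(lengths)]
print(cu_seqlens)  # [0, 3, 5, 9]

packed = list(range(9))  # stand-in for the packed token tensor
pieces = [packed[cu_seqlens[i - 1]:cu_seqlens[i]] for i in range(1, len(cu_seqlens))]
print(pieces)  # [[0, 1, 2], [3, 4], [5, 6, 7, 8]]

# Invariant asserted by multihead_attention: last entry == total length.
assert cu_seqlens[-1] == len(packed)
```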
163
+ def sdpa_attention(
164
+ q: torch.Tensor,
165
+ k: torch.Tensor,
166
+ v: torch.Tensor,
167
+ q_cu_seqlens: Optional[torch.Tensor] = None,
168
+ k_cu_seqlens: Optional[torch.Tensor] = None,
169
+ ) -> torch.Tensor:
170
+ """SDPA attention.
171
+ Args:
172
+ q, k, v: tensor of shape (batch_size, seqlen, num_heads, head_dim),
173
+ or (tot_seqlens, num_heads, head_dim) if packing.
174
+ """
175
+ seq_length = q.shape[0]
176
+ attention_mask = torch.zeros(
177
+ [1, seq_length, seq_length], device=q.device, dtype=torch.bool
178
+ )
179
+ for i in range(1, len(q_cu_seqlens)):
180
+ attention_mask[
181
+ ...,
182
+ q_cu_seqlens[i - 1] : q_cu_seqlens[i],
183
+ q_cu_seqlens[i - 1] : q_cu_seqlens[i],
184
+ ] = True
185
+ q = q.transpose(0, 1)
186
+ k = k.transpose(0, 1)
187
+ v = v.transpose(0, 1)
188
+ attn_output = F.scaled_dot_product_attention(q, k, v, attention_mask, dropout_p=0.0)
189
+ attn_output = attn_output.transpose(0, 1)
190
+ attn_output = attn_output.reshape(seq_length, -1)
191
+ return attn_output
192
+
193
+
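Both `sdpa_attention` and `eager_attention` turn `cu_seqlens` into a block-diagonal mask so tokens only attend within their own sequence. A sketch of the mask shape using plain lists instead of tensors:

```python
# Block-diagonal attention mask built from cumulative sequence lengths:
# token i may attend to token j only when both lie in the same sequence.
cu_seqlens = [0, 2, 5]  # two sequences of lengths 2 and 3
n = cu_seqlens[-1]
mask = [[False] * n for _ in range(n)]
for i in range(1, len(cu_seqlens)):
    lo, hi = cu_seqlens[i - 1], cu_seqlens[i]
    for r in range(lo, hi):
        for c in range(lo, hi):
            mask[r][c] = True

for row in mask:
    print("".join("#" if v else "." for v in row))
# ##...
# ##...
# ..###
# ..###
# ..###
```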
194
+ def eager_attention(
195
+ q: torch.Tensor,
196
+ k: torch.Tensor,
197
+ v: torch.Tensor,
198
+ q_cu_seqlens: Optional[torch.Tensor] = None,
199
+ k_cu_seqlens: Optional[torch.Tensor] = None,
200
+ ) -> torch.Tensor:
201
+ seq_length = q.shape[0]
202
+ attention_mask = torch.zeros(
203
+ [1, seq_length, seq_length], device=q.device, dtype=torch.bool
204
+ )
205
+ for i in range(1, len(q_cu_seqlens)):
206
+ attention_mask[
207
+ ...,
208
+ q_cu_seqlens[i - 1] : q_cu_seqlens[i],
209
+ q_cu_seqlens[i - 1] : q_cu_seqlens[i],
210
+ ] = True
211
+ q = q.transpose(0, 1)
212
+ k = k.transpose(0, 1)
213
+ v = v.transpose(0, 1)
214
+
215
+ attn_weight = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
216
+ attn_weight = attn_weight.masked_fill(attention_mask.logical_not(), float("-inf"))
217
+ attn_weight = torch.softmax(attn_weight, dim=-1, dtype=torch.float32).to(q.dtype)
218
+
219
+ attn_output = attn_weight @ v
220
+ attn_output = attn_output.transpose(0, 1)
221
+ attn_output = attn_output.reshape(seq_length, -1)
222
+ return attn_output
223
+
224
+
225
+ VL_VISION_ATTENTION_FUNCTIONS = {
226
+ "flash_attention_2": multihead_attention,
227
+ "sdpa": sdpa_attention,
228
+ "eager": eager_attention,
229
+ }
230
+
231
+
232
+ def _apply_rope_input_validation(x, freqs_cis):
233
+ assert x.ndim == freqs_cis.ndim + 1, (x.shape, freqs_cis.shape)
234
+ assert x.shape[:-2] == freqs_cis.shape[:-1], (x.shape, freqs_cis.shape)
235
+ assert x.shape[-1] == 2 * freqs_cis.shape[-1], (x.shape, freqs_cis.shape)
236
+ assert freqs_cis.dtype == torch.complex64, freqs_cis.dtype
237
+
238
+
239
+ def apply_rope(
240
+ xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor
241
+ ) -> tuple[torch.Tensor, torch.Tensor]:
242
+ """
243
+ Args: (The leading dimensions of all inputs should be the same)
244
+ xq: query, tensor of shape (..., num_heads, head_dim)
245
+ xk: key, tensor of shape (..., num_heads, head_dim)
246
+ freqs_cis: tensor of shape (..., head_dim/2), dtype=torch.complex64. It contains the precomputed cis(freqs) for each position in the 2D grid.
247
+ Returns:
248
+ xq_out, xk_out: tensors of shape (..., num_heads, head_dim)
249
+ """
250
+ _apply_rope_input_validation(xq, freqs_cis)
251
+ _apply_rope_input_validation(xk, freqs_cis)
252
+
253
+ freqs_cis = freqs_cis.unsqueeze(-2) # ..., 1, head_dim/2
254
+ # ..., num_heads, head_dim/2
255
+ xq_ = torch.view_as_complex(xq.float().view(*xq.shape[:-1], -1, 2))
256
+ xk_ = torch.view_as_complex(xk.float().view(*xk.shape[:-1], -1, 2))
257
+ xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(-2) # ..., num_heads, head_dim
258
+ xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(-2) # ..., num_heads, head_dim
259
+ return xq_out.type_as(xq), xk_out.type_as(xk)
260
+
261
+
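`apply_rope` views each consecutive `(even, odd)` feature pair as one complex number and multiplies it by `cis(theta)`, i.e. a rotation in the plane. A sketch of that operation on a single pair with Python's built-in complex numbers instead of `torch.view_as_complex` (the helper name `rope_pair` is illustrative only):

```python
import cmath

def rope_pair(x_even: float, x_odd: float, theta: float) -> tuple[float, float]:
    """Rotate one (even, odd) feature pair by cis(theta), as apply_rope does."""
    z = complex(x_even, x_odd) * cmath.exp(1j * theta)
    return z.real, z.imag

# The rotation preserves the norm of the pair ...
a, b = rope_pair(1.0, 0.0, 0.5)
assert abs(abs(complex(a, b)) - 1.0) < 1e-9

# ... and rotating by t1 then t2 equals rotating by t1 + t2, which is why
# attention scores end up depending only on relative position.
c1 = complex(*rope_pair(1.0, 2.0, 0.3))
c2 = complex(*rope_pair(*rope_pair(1.0, 2.0, 0.1), 0.2))
assert abs(c1 - c2) < 1e-9
```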
262
+ class Learnable2DInterpPosEmb(nn.Module):
263
+ def __init__(
264
+ self, height: int, width: int, dim: int, interpolation_mode: str = "bicubic"
265
+ ) -> None:
266
+ super().__init__()
267
+ self.height = height
268
+ self.width = width
269
+ self.interpolation_mode = interpolation_mode
270
+ self.weight = nn.Parameter(torch.empty(height, width, dim))
271
+ self.reset_parameters()
272
+
273
+ def reset_parameters(self):
274
+ nn.init.normal_(self.weight)
275
+
276
+ def forward(self, x, grid_hws) -> torch.Tensor:
277
+ pos_embs = []
278
+ for shape in grid_hws.tolist():
279
+ if shape == self.weight.shape[:-1]:
280
+ pos_embs.append(self.weight.flatten(end_dim=1))
281
+ else:
282
+ pos_embs.append(
283
+ F.interpolate(
284
+ self.weight.permute((2, 0, 1)).unsqueeze(0),
285
+ size=shape,
286
+ mode=self.interpolation_mode,
287
+ )
288
+ .squeeze(0)
289
+ .permute((1, 2, 0))
290
+ .flatten(end_dim=1)
291
+ )
292
+ out = x + torch.cat(pos_embs)
293
+ return out
294
+
295
+
296
+ class MoonVisionPatchEmbed(nn.Module):
297
+
298
+ def __init__(
299
+ self,
300
+ out_dim: int,
301
+ in_dim: int = 3,
302
+ patch_size: Union[int, Tuple[int, int]] = (14, 14),
303
+ pos_emb_height: int = 14,
304
+ pos_emb_width: int = 14,
305
+ ):
306
+ super().__init__()
307
+ assert isinstance(
308
+ patch_size, (int, Sequence)
309
+ ), f"Invalid patch_size type: {type(patch_size)}"
310
+ if isinstance(patch_size, int):
311
+ patch_size = (patch_size, patch_size)
312
+ assert (
313
+ len(patch_size) == 2
314
+ ), f"Expected patch_size to be a tuple of 2, got {patch_size}"
315
+ self.patch_size = patch_size
316
+
317
+ self.proj = nn.Conv2d(
318
+ in_dim, out_dim, kernel_size=patch_size, stride=patch_size
319
+ )
320
+
321
+ self.pos_emb = Learnable2DInterpPosEmb(
322
+ height=pos_emb_height, width=pos_emb_width, dim=out_dim
323
+ )
324
+
325
+ def forward(self, x, grid_hws) -> torch.Tensor:
326
+ """
327
+ Args:
328
+ x (L, C, patch_h, patch_w): patchified input tensor
329
+ grid_hws (N, 2): grid height and width
330
+ Returns:
331
+ (L, Cout) tensor
332
+ """
333
+ x = self.proj(x).view(x.size(0), -1)
334
+ # apply positional embedding
335
+ x = self.pos_emb(x, grid_hws)
336
+ return x
337
+
338
+ class Rope2DPosEmb(nn.Module):
339
+ """2D rotary position embedding with multi-resolution support.
340
+ This class is intended to be used in the following way:
341
+ 1. Before training, create an instance of Rope2DPosEmb. This instance will hold the precomputed cis.
342
+ 2. Before each forward pass, call `get_freqs_cis_by_*` to get the `freqs_cis` tensor for this iteration.
343
+ 3. During the forward pass, pass the `freqs_cis` tensor to each attention layer, and call `apply` just before each attention operation.
344
+ The rope is shared across all attention layers and all heads.
345
+ Refs:
346
+ - RoFormer: https://arxiv.org/abs/2104.09864
347
+ - VisionLLaMA: https://arxiv.org/abs/2403.00522
348
+ - https://github.com/Meituan-AutoML/VisionLLaMA/blob/main/dit/models.py
349
+ Args:
350
+ dim (int): usually the multi-head attention dimension, should be divisible by 4 (TODO: relax this constraint if needed)
351
+ max_height (int): the maximum height of the 2D grid
352
+ max_width (int): the maximum width of the 2D grid
353
+ theta_base (float): the base of the theta
354
355
+ """
356
+
357
+ def __init__(self, dim: int, max_height: int, max_width: int, theta_base=10000):
358
+ super().__init__()
359
+ self.dim = dim
360
+ assert self.dim % 4 == 0, "dim must be divisible by 4"
361
+ self.max_height = max_height
362
+ self.max_width = max_width
363
+ self.theta_base = theta_base
364
+
365
+ self.freqs_cis = None
366
+
367
+ def extra_repr(self):
368
+ return f"dim={self.dim}, max_height={self.max_height}, max_width={self.max_width}, theta_base={self.theta_base}"
369
+
370
+ def _precompute_freqs_cis(self, down_scale_rate, device: torch.device) -> torch.Tensor:
371
+ """Calculate the cis(freqs) for each position in the 2D grid.
372
+ Return: complex tensor of shape (max_height, max_width, dim//2) and value:
373
+ height axis: ret[h, w, 2*i] = cis(h * theta_base**(-4*i/dim))
374
+ weight axis: ret[h, w, 2*i+1] = cis(w * theta_base**(-4*i/dim)) with (i in [0, dim//4))
375
+ note: `cis` is a mathematical notation defined by cis x = cos x + i sin x,
376
+ """
377
+ max_height = self.max_height // down_scale_rate
378
+ max_width = self.max_width // down_scale_rate
379
+
380
+ N = max_height * max_width
381
+ flat_pos = torch.arange(0, N).float().to(device)
382
+ x_pos = flat_pos % max_width
383
+ y_pos = flat_pos // max_width
384
+ dim_range = (
385
+ torch.arange(0, self.dim, 4)[: (self.dim // 4)].float().to(device)
386
+ ) # C/4
387
+ freqs = 1.0 / (self.theta_base ** (dim_range / self.dim))
388
+ x_freqs = torch.outer(x_pos, freqs).float() # N, C/4
389
+ y_freqs = torch.outer(y_pos, freqs).float() # N, C/4
390
+ x_cis = torch.polar(torch.ones_like(x_freqs), x_freqs) # N, C/4
391
+ y_cis = torch.polar(torch.ones_like(y_freqs), y_freqs) # N, C/4
392
+ # N, C/4, 2
393
+ freqs_cis = torch.cat(
394
+ [x_cis.unsqueeze(dim=-1), y_cis.unsqueeze(dim=-1)], dim=-1
395
+ )
396
+ # max_height, max_width, C/2
397
+ freqs_cis = freqs_cis.reshape(max_height, max_width, -1)
398
+ return freqs_cis
399
+
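`_precompute_freqs_cis` recovers each patch's 2D grid coordinate from its flat index with `x = idx % max_width` and `y = idx // max_width`. A sketch for a 2x3 grid:

```python
# Flat index -> (row, col) coordinate, as in _precompute_freqs_cis.
max_width = 3
coords = [(idx // max_width, idx % max_width) for idx in range(2 * max_width)]
print(coords)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```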
400
+ def get_freqs_cis(self, grid_hws: torch.Tensor, down_scale_rate=1) -> torch.Tensor:
401
+ """
402
+ Args:
403
+ grid_hws (torch.Tensor): grid height and width
404
+ Returns:
405
+ freqs_cis: tensor of shape (sum(t * height * width), dim//2)
406
+ """
407
+ max_height = self.max_height // down_scale_rate
408
+ max_width = self.max_width // down_scale_rate
409
+
410
+ if self.freqs_cis is None:
411
+ self.freqs_cis = self._precompute_freqs_cis(down_scale_rate, grid_hws.device)
412
+
413
+ shapes = grid_hws.tolist()
414
+ assert all(
415
+ 1 <= h <= max_height and 1 <= w <= max_width for h, w in shapes
416
+ ), (
417
+ shapes,
418
+ max_height,
419
+ max_width,
420
+ )
421
+ freqs_cis = torch.cat(
422
+ [self.freqs_cis[:h, :w].reshape(-1, self.dim // 2) for h, w in shapes],
423
+ dim=0,
424
+ )
425
+ return freqs_cis
426
+
427
+
428
+ class MLP2(nn.Module):
429
+ """
430
+ Args:
431
+ dims: [in_dim, hidden_dim, out_dim]
432
+ bias: whether to use bias in linear layer.
433
+ """
434
+
435
+ def __init__(self, dims: list[int], activation, bias=True):
436
+ super().__init__()
437
+ assert len(dims) == 3
438
+ self.fc0 = nn.Linear(dims[0], dims[1], bias=bias)
439
+ self.fc1 = nn.Linear(dims[1], dims[2], bias=bias)
440
+ self.activation = activation
441
+ for m in [self.fc0, self.fc1]:
442
+ nn.init.trunc_normal_(m.weight, std=math.sqrt(2 / m.in_features))
443
+ if m.bias is not None:
444
+ nn.init.zeros_(m.bias)
445
+
446
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
447
+ x = self.fc0(x)
448
+ x = self.activation(x)
449
+ return self.fc1(x)
450
+
451
+ ###### Merger layer ######
452
+ class PatchMergingLayer(nn.Module):
453
+ def __init__(self, embed_dim, enable_merging=True, merging_method="avg_pooling", norm_layer=nn.LayerNorm):
454
+ """
455
+ :param embed_dim: embedding dimension of the transformer tokens
456
+ :param enable_merging: whether to enable token merging
457
+ :param merging_method: merging strategy, either 'avg_pooling' or 'm_pooling'
458
+ """
459
+ super().__init__()
460
+ self.enable_merging = enable_merging
461
+ self.merging_method = merging_method
462
+ self.zero_init_fc = nn.Linear(embed_dim, embed_dim, bias=False)
463
+ if self.merging_method == 'avg_pooling':
464
+ pass
465
+ elif self.merging_method == 'm_pooling':
466
+ self.attn_layer = nn.Sequential(
467
+ nn.Linear(embed_dim * 2, embed_dim),
468
+ nn.GELU(),
469
+ nn.Linear(embed_dim, embed_dim)
470
+ )
471
+ self.num_head = 16
472
+
473
+ def forward(self, x, cu_seqlens, spatial_shapes):
474
+ if not self.enable_merging:
475
+ return x, cu_seqlens
476
+ cu_seqlens_out = cu_seqlens.clone() # (N+1, )
477
+ feature_x = x
478
+ x_i_list = []
479
+ for i in range(1, len(cu_seqlens)):
480
+ start_idx = cu_seqlens[i-1].item()
481
+ end_idx = cu_seqlens[i].item()
482
+ x_i = x[start_idx:end_idx, :]
483
+ h, w = spatial_shapes[i-1]
484
+ x_i = x_i.view(h, w, -1) # (h, w, embed_dim)
485
+
486
+ if self.merging_method == 'avg_pooling':
487
+ x_i = rearrange(x_i, 'h w c -> c h w')
488
+ x_i = F.avg_pool2d(x_i, kernel_size=2, stride=2)
489
+ x_i = rearrange(x_i, 'c h w -> (h w) c')
490
+ elif self.merging_method == 'm_pooling':
491
+ x_i = rearrange(x_i, '(h p1) (w p2) c -> (h w) (p1 p2) c', p1=2, p2=2)
492
+ pooled_x_i = x_i.mean(-2, keepdim=True).expand(-1, 4, -1)
493
+ fused_x_i = torch.cat([x_i, pooled_x_i], dim=-1)
494
+ attn_logits = self.attn_layer(fused_x_i)
495
+ # multi-head attn
496
+ attn_logits = rearrange(attn_logits, 'n s (m d) -> n m s d', m=self.num_head)
497
+ attn_weights = F.softmax(attn_logits, dim=-2)
498
+ attn_weights = rearrange(attn_weights, 'n m s d -> n s (m d)')
499
+ # multi-head attn
500
+ x_i = (x_i * attn_weights).sum(-2)
501
+
502
+ x_i_list.append(x_i)
503
+ cu_seqlens_out[i] = cu_seqlens_out[i-1] + x_i.shape[0]
504
+ x = torch.cat(x_i_list, dim=0) # (L, embed_dim)
505
+ return x, cu_seqlens_out, spatial_shapes//2, feature_x
506
+
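With `'avg_pooling'`, `PatchMergingLayer` halves each spatial side, so a sequence of `h*w` tokens shrinks to `(h//2)*(w//2)` and `cu_seqlens` is updated to match. A pure-Python sketch on scalar "tokens" (no torch; `merge_2x2` is an illustrative helper, not part of the model):

```python
def merge_2x2(tokens, h, w):
    """Average each non-overlapping 2x2 block of an h*w token grid."""
    grid = [tokens[r * w:(r + 1) * w] for r in range(h)]
    out = []
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            block = (grid[r][c] + grid[r][c + 1]
                     + grid[r + 1][c] + grid[r + 1][c + 1])
            out.append(block / 4)
    return out

tokens = [float(t) for t in range(16)]  # one 4x4 image, 16 tokens
merged = merge_2x2(tokens, 4, 4)
print(merged)       # [2.5, 4.5, 10.5, 12.5]
print(len(merged))  # 4 -> the next cu_seqlens entry grows by 4, not 16
```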
507
+ class MoonVitEncoderLayer(nn.Module):
508
+
509
+ def __init__(
510
+ self,
511
+ layer_idx: int,
512
+ num_heads: int,
513
+ hidden_dim: int,
514
+ mlp_dim: int,
515
+ *,
516
+ attn_implementation: str = "eager",
517
+ activation=F.gelu,
518
+ attn_bias: bool = False,
519
+ enable_merging: bool = False,
520
+ merging_method: str = "avg_pooling",
521
+ merger_layer_index: List[int] = None,
522
+ ):
523
+ super().__init__()
524
+ self.num_heads = num_heads
525
+ self.hidden_dim = hidden_dim
526
+ self.hidden_size_per_attention_head = self.hidden_dim // self.num_heads
527
+ self.attn_implementation = attn_implementation
528
+
529
+ self.norm0 = nn.LayerNorm(hidden_dim)
530
+ self.norm1 = nn.LayerNorm(hidden_dim)
531
+ self.mlp = MLP2([hidden_dim, mlp_dim, hidden_dim], activation)
532
+ self.wqkv = nn.Linear(hidden_dim, hidden_dim * 3, bias=attn_bias)
533
+ self.wo = nn.Linear(hidden_dim, hidden_dim, bias=attn_bias)
534
+
535
+ if merger_layer_index is not None and layer_idx in merger_layer_index:
536
+ self.merger = PatchMergingLayer(
537
+ embed_dim=hidden_dim,
538
+ enable_merging=enable_merging,
539
+ merging_method=merging_method,
540
+ )
541
+ else:
542
+ self.merger = None
543
+
544
+ def attention_qkvpacked(
545
+ self,
546
+ x: torch.Tensor,
547
+ cu_seqlens: torch.Tensor,
548
+ rope_freqs_cis: Optional[torch.Tensor] = None,
549
+ ):
550
+ """
551
+ Args:
552
+ x (torch.Tensor): (batch_size, seqlen, hidden_dim)
553
+ cu_seqlens (torch.Tensor):
554
+ """
555
+ xqkv = self.wqkv(x)
556
+
557
+ qkv_shape = xqkv.size()[:-1] + (
558
+ 3,
559
+ self.num_heads,
560
+ self.hidden_size_per_attention_head,
561
+ )
562
+ # xqkv: (batch_size, seqlen, 3, nheads, headdim)
563
+ xqkv = xqkv.view(*qkv_shape)
564
+ xq, xk, xv = torch.unbind(xqkv, dim=-3)
565
+
566
+ xq, xk = apply_rope(xq, xk, rope_freqs_cis)
567
+
568
+ attn_func = VL_VISION_ATTENTION_FUNCTIONS[self.attn_implementation]
569
+ attn_out = attn_func(
570
+ xq, xk, xv, q_cu_seqlens=cu_seqlens, k_cu_seqlens=cu_seqlens
571
+ )
572
+
573
+ attn_out = self.wo(attn_out)
574
+ return attn_out
575
+
576
+ def forward(
577
+ self,
578
+ hidden_states: torch.Tensor,
579
+ cu_seqlens: torch.Tensor,
580
+ rope_freqs_cis: Union[torch.Tensor, None] = None,
581
+ spatial_shapes: Optional[torch.Tensor] = None,
582
+ ) -> torch.Tensor:
583
+ """
584
+ Args:
585
+ hidden_states: non-packed (B, N, D) or packed (L, D). if non-packed, seqlens should be None, if packed, seqlens should be set
586
+ Returns:
587
+ output: same shape of input, non-packed (B, N, D) for non-packed input, (L, D) for packed input
588
+ """
589
+ residual = hidden_states
590
+ hidden_states = self.norm0(hidden_states)
591
+ attn_out = self.attention_qkvpacked(
592
+ hidden_states, cu_seqlens, rope_freqs_cis=rope_freqs_cis
593
+ )
594
+ hidden_states = residual + attn_out
595
+
596
+ residual = hidden_states
597
+ hidden_states = self.mlp(self.norm1(hidden_states))
598
+ hidden_states = residual + hidden_states
599
+
600
+ if self.merger is not None:
601
+ hidden_states, cu_seqlens, spatial_shapes, feature_x = self.merger(
602
+ hidden_states, cu_seqlens, spatial_shapes
603
+ )
604
+ outputs = (hidden_states, cu_seqlens, spatial_shapes, feature_x)  # return feature_x for later fusion
605
+ else:
606
+ outputs = (hidden_states, cu_seqlens)
607
+
608
+ return outputs
609
+
610
+ class FusedLayer(nn.Module):
611
+ def __init__(self, dim, down_scale_times):
612
+ super().__init__()
613
+ self.dim = dim
614
+ self.down_scale_times = down_scale_times
615
+ self.predictor = nn.ModuleList([nn.Sequential(
616
+ nn.Linear(dim*2, dim),
617
+ nn.GELU(),
618
+ nn.Linear(dim, dim),
619
+ ) for _ in range(down_scale_times)])
620
+ self.ln_list = nn.ModuleList([nn.LayerNorm(dim) for _ in range(down_scale_times)])
621
+
622
+ def forward(self, hidden_states, feature_x_list, spatial_shapes, use_fused_layer=True):
623
+ if not use_fused_layer:
624
+ return hidden_states
625
+ else:
626
+ fused_features = []
627
+ cur_idx = [0 for i in range(self.down_scale_times)]
628
+ for batch_idx, spatial_shape in enumerate(spatial_shapes):
629
+ cur_h = spatial_shape[0]
630
+ cur_w = spatial_shape[1]
631
+ cur_new_feature_x = []
632
+ for down_scale_idx, feature_x in enumerate(feature_x_list):
633
+ down_scale_rate = (self.down_scale_times - down_scale_idx) * 2
634
+ feature_x_h = down_scale_rate * cur_h
635
+ feature_x_w = down_scale_rate * cur_w
636
+ start_idx = cur_idx[down_scale_idx]
637
+ end_idx = start_idx + feature_x_h * feature_x_w
638
+ new_feature_x = feature_x[start_idx:end_idx, :]
639
+ new_feature_x = rearrange(new_feature_x, '(h w) d -> h w d', h=feature_x_h, w=feature_x_w)
640
+ new_feature_x = rearrange(new_feature_x, '(cur_h p1) (cur_w p2) d -> (cur_h cur_w) (p1 p2) d', cur_h=cur_h, cur_w=cur_w)
641
+ pooled_feature_x = new_feature_x.mean(-2, keepdim=True).expand(-1, down_scale_rate**2, -1)
642
+ fused_feature_x = torch.cat([new_feature_x, pooled_feature_x], dim=-1)
643
+ score = self.predictor[down_scale_idx](fused_feature_x)
644
+ normalized_score = F.softmax(score, dim=-2)
645
+ new_feature_x = (new_feature_x * normalized_score).sum(dim=-2)
646
+ new_feature_x = self.ln_list[down_scale_idx](new_feature_x)
647
+ cur_new_feature_x.append(new_feature_x)
648
+ cur_idx[down_scale_idx] = end_idx
649
+
650
+ cur_new_feature_x = torch.stack(cur_new_feature_x, dim=0)
651
+ fused_features.append(cur_new_feature_x)
652
+ assert all(cur_idx[i] == feature_x_list[i].shape[0] for i in range(len(feature_x_list))), f"cur_idx: {cur_idx}"
653
+ return (hidden_states, fused_features)
654
+
655
+ class MoonVitEncoder(nn.Module):
656
+
657
+ def __init__(
658
+ self,
659
+ hidden_dim: int,
660
+ num_layers: int,
661
+ block_cfg: dict,
662
+ use_fused_layer: bool = False,
663
+ ) -> None:
664
+ super().__init__()
665
+
666
+ self.rope_2d = Rope2DPosEmb(
667
+ block_cfg["hidden_dim"] // block_cfg["num_heads"], 512, 512
668
+ )
669
+ self.blocks = nn.ModuleList(
670
+ [MoonVitEncoderLayer(layer_idx=i, **block_cfg) for i in range(num_layers)]
671
+ )
672
+ self.final_layernorm = nn.LayerNorm(hidden_dim)
673
+ self.use_fused_layer = use_fused_layer
674
+ if self.use_fused_layer:
675
+ self.fused_layer = FusedLayer(hidden_dim, len(block_cfg["merger_layer_index"]))
676
+
677
+ def forward(
678
+ self, hidden_states: torch.Tensor, grid_hws: torch.Tensor
679
+ ) -> torch.Tensor:
680
+ rope_freqs_cis = self.rope_2d.get_freqs_cis(grid_hws=grid_hws)
681
+
682
+ lengths = torch.cat(
683
+ (
684
+ torch.zeros(1, device=hidden_states.device, dtype=grid_hws.dtype),
685
+ grid_hws[:, 0] * grid_hws[:, 1],
686
+ )
687
+ )
688
+ cu_seqlens = lengths.cumsum(dim=0, dtype=torch.int32)
689
+ down_scale_rate = 1
690
+ feature_x_list = []
691
+ for _, block in enumerate(self.blocks):
692
+ layer_outputs = block(
693
+ hidden_states, cu_seqlens, rope_freqs_cis=rope_freqs_cis, spatial_shapes=grid_hws
694
+ )
695
+ if len(layer_outputs) > 2:
696
+ down_scale_rate *= 2
697
+ hidden_states, cu_seqlens, grid_hws, feature_x = layer_outputs
698
+ rope_freqs_cis = self.rope_2d.get_freqs_cis(grid_hws=grid_hws, down_scale_rate=down_scale_rate)
699
+ feature_x_list.append(feature_x)
700
+ else:
701
+ hidden_states, cu_seqlens = layer_outputs
702
+
703
+ hidden_states = self.final_layernorm(hidden_states)
704
+ if len(feature_x_list) > 0 and self.use_fused_layer:
705
+ hidden_states = self.fused_layer(hidden_states, feature_x_list, grid_hws)
706
+ return hidden_states, grid_hws
707
+
708
+
709
+ class MoonVitPretrainedModel(PreTrainedModel):
710
+ config_class = MoonViTConfig
711
+ model_type = "moonvit"
712
+ _no_split_modules = ["PackingTransformer"]
713
+ _supports_flash_attn_2 = True
714
+ _supports_sdpa = True
715
+
716
+ def __init__(self, config: MoonViTConfig, *inputs, **kwargs):
717
+ super().__init__(config, *inputs, **kwargs)
718
+ config = deepcopy(config)
719
+ self.patch_size = config.patch_size
720
+        self.patch_embed = MoonVisionPatchEmbed(
+            out_dim=config.hidden_size,
+            patch_size=config.patch_size,
+            pos_emb_height=config.init_pos_emb_height,
+            pos_emb_width=config.init_pos_emb_width,
+        )
+
+        config._attn_implementation = "sdpa" if not hasattr(config, "use_flash_attention_2") else "flash_attention_2"
+        merger_layer_index = None
+        if hasattr(config, "vision_config"):
+            if hasattr(config.vision_config, "merger_layer_index"):
+                merger_layer_index = config.vision_config.merger_layer_index
+                merging_method = config.vision_config.merging_method
+            use_fused_layer = getattr(config.vision_config, "use_fused_layer", False)
+        else:
+            if hasattr(config, "merger_layer_index"):
+                merger_layer_index = config.merger_layer_index
+                merging_method = config.merging_method
+            use_fused_layer = getattr(config, "use_fused_layer", False)
+
+        if merger_layer_index is not None:
+            enable_merging = True
+            merging_method = merging_method if merging_method is not None else "avg_pooling"
+        else:
+            enable_merging = False
+            merging_method = None
+
+        self.encoder = MoonVitEncoder(
+            hidden_dim=config.hidden_size,
+            num_layers=config.num_hidden_layers,
+            block_cfg={
+                "num_heads": config.num_attention_heads,
+                "hidden_dim": config.hidden_size,
+                "mlp_dim": config.intermediate_size,
+                "activation": PytorchGELUTanh(),
+                "attn_bias": True,
+                "attn_implementation": config._attn_implementation,
+                "enable_merging": enable_merging,
+                "merging_method": merging_method,
+                "merger_layer_index": merger_layer_index,
+            },
+            use_fused_layer=use_fused_layer,
+        )
+
+    def forward(
+        self, pixel_values: torch.Tensor, grid_hws: torch.Tensor
+    ) -> Tuple[torch.Tensor, torch.Tensor]:
+        """
+        Args:
+            pixel_values (torch.Tensor): The input pixel values.
+            grid_hws (torch.Tensor): The grid heights and widths.
+        Returns:
+            Tuple[torch.Tensor, torch.Tensor]: The output tokens and the updated grid sizes.
+        """
+        hidden_states = self.patch_embed(pixel_values, grid_hws)
+        hidden_states, grid_hws = self.encoder(hidden_states, grid_hws)
+        return hidden_states, grid_hws
+
+
+class MoonViTVisionTower(nn.Module):
+    def __init__(self, vision_tower, vision_tower_cfg, delay_load=False):
+        super().__init__()
+
+        self.is_loaded = False
+        self.config = MoonViTConfig()
+        self.vision_tower_name = vision_tower
+        self.image_processor = MoonViTImageProcessor()
+
+        if not delay_load:
+            rank0_print(f"Loading vision tower: {vision_tower}")
+            self.load_model()
+        elif getattr(vision_tower_cfg, "unfreeze_mm_vision_tower", False):
+            rank0_print("The checkpoint seems to contain `vision_tower` weights: `unfreeze_mm_vision_tower`: True.")
+            self.load_model()
+        elif hasattr(vision_tower_cfg, "mm_tunable_parts") and "mm_vision_tower" in vision_tower_cfg.mm_tunable_parts:
+            rank0_print("The checkpoint seems to contain `vision_tower` weights: `mm_tunable_parts` contains `mm_vision_tower`.")
+            self.load_model()
+        else:
+            self.cfg_only = self.config
+
+    def load_model(self, device_map=None):
+        if self.is_loaded:
+            rank0_print("{} is already loaded, `load_model` called again, skipping.".format(self.vision_tower_name))
+            return
+
+        self.vision_tower = MoonVitPretrainedModel.from_pretrained(self.vision_tower_name, device_map=device_map)
+        self.vision_tower.requires_grad_(False)
+        self.is_loaded = True
+
+    def forward(self, images, patch_sizes):
+        pixel_values = []
+        for idx, image in enumerate(images):
+            if not valid_images(image):
+                raise ValueError("Invalid image input. Please provide a valid image.")
+            C, H, W = image.shape
+            patches = rearrange(image, "c (h p1) (w p2) -> h w c p1 p2", h=patch_sizes[idx][0], w=patch_sizes[idx][1])
+            patches = rearrange(patches, "h w c p1 p2 -> (h w) c p1 p2")  # (L, C, p1, p2)
+            pixel_values.append(patches)
+        pixel_values = torch.concat(pixel_values, dim=0)  # (L*, C, p1, p2)
+        grid_hws = torch.tensor([tuple(patch_size) for patch_size in patch_sizes], device=pixel_values.device)  # (N, 2)
+        image_features, grid_hws = self.vision_tower(pixel_values, grid_hws)
+        feature_x_list = None
+        if isinstance(image_features, tuple):
+            image_features, feature_x_list = image_features
+        output_features = []
+        offset = 0
+        for grid_hw in grid_hws:
+            h, w = grid_hw
+            num_tokens = h * w
+            output_features.append(image_features[offset : offset + num_tokens].unsqueeze(0))  # (1, num_tokens, hidden_size)
+            offset += num_tokens
+
+        assert offset == image_features.shape[0], \
+            f"Used {offset} tokens, but image_features has {image_features.shape[0]} tokens!"
+        if feature_x_list is not None:
+            output_features = list(zip(output_features, feature_x_list))
+        return output_features
+
+    @property
+    def dummy_feature(self):
+        return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
+
+    @property
+    def dtype(self):
+        return next(self.vision_tower.parameters()).dtype
+
+    @property
+    def device(self):
+        return next(self.vision_tower.parameters()).device
+
+    @property
+    def hidden_size(self):
+        return self.config.hidden_size
+
+    @property
+    def num_patches(self):
+        return (self.config.image_size // self.config.patch_size) ** 2
+
+    @property
+    def num_patches_per_side(self):
+        return self.config.image_size // self.config.patch_size
+
+    @property
+    def image_size(self):
+        return self.config.image_size
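The two `rearrange` calls in `MoonViTVisionTower.forward` above flatten each image into a row-major sequence of patches, which is what later lets the tower slice `h * w` consecutive tokens back out per image. A minimal pure-Python sketch of the `"c (h p1) (w p2) -> (h w) c p1 p2"` pattern (the helper name `image_to_patches` is mine, not from the model):

```python
def image_to_patches(image, p1, p2):
    """Split a CHW image (nested lists) into a row-major list of (C, p1, p2)
    patches, mirroring einops' "c (h p1) (w p2) -> (h w) c p1 p2"."""
    C, H, W = len(image), len(image[0]), len(image[0][0])
    assert H % p1 == 0 and W % p2 == 0, "image must tile evenly into patches"
    h, w = H // p1, W // p2
    patches = []
    for hi in range(h):          # patch-grid rows
        for wi in range(w):      # patch-grid cols: row-major "(h w)" flattening
            patch = [[[image[c][hi * p1 + i][wi * p2 + j] for j in range(p2)]
                      for i in range(p1)]
                     for c in range(C)]
            patches.append(patch)
    return patches
```

For a 1-channel 4x4 image with values 0..15 and 2x2 patches, the first patch holds `[[0, 1], [4, 5]]`: patches advance left-to-right, then top-to-bottom.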
VLMEvalKit-sudoku/llava/model/multimodal_encoder/modeling_siglip2_ps8.py ADDED
@@ -0,0 +1,1774 @@
+# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+# This file was automatically generated from src/transformers/models/siglip2/modular_siglip2.py.
+# Do NOT edit this file manually as any edits will be overwritten by the generation of
+# the file from the modular. If any change should be done, please apply the change to the
+# modular_siglip2.py file directly. One of our CI enforces this.
+# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+# coding=utf-8
+# Copyright 2025 The HuggingFace Inc. team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import math
+import os
+import warnings
+from dataclasses import dataclass
+from functools import partial, reduce
+from typing import Any, Dict, Optional, Tuple, Union
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint
+from einops import rearrange
+from PIL import Image
+from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+from torch.nn.init import _calculate_fan_in_and_fan_out
+
+from transformers.activations import ACT2FN
+from transformers.configuration_utils import PretrainedConfig
+from transformers.image_processing_utils import BatchFeature, get_size_dict
+from transformers.image_transforms import (
+    convert_to_rgb,
+    normalize,
+    rescale,
+    resize,
+    to_channel_dimension_format,
+)
+from transformers.image_utils import (
+    ChannelDimension,
+    PILImageResampling,
+    to_numpy_array,
+)
+from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask
+from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling, ImageClassifierOutput
+from transformers.modeling_utils import PreTrainedModel
+from transformers.utils import (
+    ModelOutput,
+    add_start_docstrings,
+    add_start_docstrings_to_model_forward,
+    is_flash_attn_2_available,
+    is_flash_attn_greater_or_equal_2_10,
+    logging,
+    replace_return_docstrings,
+)
+
+from llava.utils import rank0_print
+
+if is_flash_attn_2_available():
+    from transformers.modeling_flash_attention_utils import _flash_attention_forward
+
+
+class SigLipImageProcessor:
+    def __init__(
+        self,
+        image_mean=(0.5, 0.5, 0.5),
+        image_std=(0.5, 0.5, 0.5),
+        size=(384, 384),
+        crop_size: Dict[str, int] = None,
+        resample=PILImageResampling.BICUBIC,
+        rescale_factor=1 / 255,
+        data_format=ChannelDimension.FIRST,
+    ):
+        crop_size = crop_size if crop_size is not None else {"height": 384, "width": 384}
+        crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size")
+
+        self.image_mean = image_mean
+        self.image_std = image_std
+        self.size = size
+        self.resample = resample
+        self.rescale_factor = rescale_factor
+        self.data_format = data_format
+        self.crop_size = crop_size
+
+    def preprocess(self, images, do_resize=True, do_center_crop=True, do_rescale=True, do_normalize=True, return_tensors="pt"):
+        if isinstance(images, Image.Image):
+            images = [images]
+        else:
+            # to adapt video data
+            images = [to_numpy_array(image) for image in images]
+        assert isinstance(images, list)
+
+        transforms = [
+            convert_to_rgb,
+            to_numpy_array,
+        ]
+        if do_resize:
+            transforms.append(partial(resize, size=self.size, resample=self.resample, data_format=self.data_format))
+        if do_rescale:
+            transforms.append(partial(rescale, scale=self.rescale_factor, data_format=self.data_format))
+        if do_normalize:
+            transforms.append(partial(normalize, mean=self.image_mean, std=self.image_std, data_format=self.data_format))
+        transforms.append(partial(to_channel_dimension_format, channel_dim=self.data_format, input_channel_dim=self.data_format))
+
+        images = reduce(lambda x, f: [*map(f, x)], transforms, images)
+        data = {"pixel_values": images}
+        return BatchFeature(data=data, tensor_type=return_tensors)
+
+
+class Siglip2TextConfig(PretrainedConfig):
+    r"""
+    This is the configuration class to store the configuration of a [`Siglip2TextModel`]. It is used to instantiate a
+    Siglip2 text encoder according to the specified arguments, defining the model architecture. Instantiating a
+    configuration with the defaults will yield a similar configuration to that of the text encoder of the Siglip2
+    [google/siglip2-base-patch16-224](https://huggingface.co/google/siglip2-base-patch16-224) architecture.
+
+    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+    documentation from [`PretrainedConfig`] for more information.
+
+    Args:
+        vocab_size (`int`, *optional*, defaults to 32000):
+            Vocabulary size of the Siglip2 text model. Defines the number of different tokens that can be represented
+            by the `inputs_ids` passed when calling [`Siglip2Model`].
+        hidden_size (`int`, *optional*, defaults to 768):
+            Dimensionality of the encoder layers and the pooler layer.
+        intermediate_size (`int`, *optional*, defaults to 3072):
+            Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
+        num_hidden_layers (`int`, *optional*, defaults to 12):
+            Number of hidden layers in the Transformer encoder.
+        num_attention_heads (`int`, *optional*, defaults to 12):
+            Number of attention heads for each attention layer in the Transformer encoder.
+        max_position_embeddings (`int`, *optional*, defaults to 64):
+            The maximum sequence length that this model might ever be used with. Typically set this to something large
+            just in case (e.g., 512 or 1024 or 2048).
+        hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
+            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+            `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
+        layer_norm_eps (`float`, *optional*, defaults to 1e-06):
+            The epsilon used by the layer normalization layers.
+        attention_dropout (`float`, *optional*, defaults to 0.0):
+            The dropout ratio for the attention probabilities.
+        pad_token_id (`int`, *optional*, defaults to 1):
+            The id of the padding token in the vocabulary.
+        bos_token_id (`int`, *optional*, defaults to 49406):
+            The id of the beginning-of-sequence token in the vocabulary.
+        eos_token_id (`int`, *optional*, defaults to 49407):
+            The id of the end-of-sequence token in the vocabulary.
+        projection_size (`int`, *optional*, defaults to `hidden_size`):
+            The size of the projection head.
+
+    Example:
+
+    ```python
+    >>> from transformers import Siglip2TextConfig, Siglip2TextModel
+
+    >>> # Initializing a Siglip2TextConfig with google/siglip2-base-patch16-224 style configuration
+    >>> configuration = Siglip2TextConfig()
+
+    >>> # Initializing a Siglip2TextModel (with random weights) from the google/siglip2-base-patch16-224 style configuration
+    >>> model = Siglip2TextModel(configuration)
+
+    >>> # Accessing the model configuration
+    >>> configuration = model.config
+    ```"""
+
+    model_type = "siglip2_text_model"
+    base_config_key = "text_config"
+
+    def __init__(
+        self,
+        vocab_size=32000,
+        hidden_size=768,
+        intermediate_size=3072,
+        num_hidden_layers=12,
+        num_attention_heads=12,
+        max_position_embeddings=64,
+        hidden_act="gelu_pytorch_tanh",
+        layer_norm_eps=1e-6,
+        attention_dropout=0.0,
+        # This differs from `CLIPTokenizer`'s default and from openai/siglip2
+        # See https://github.com/huggingface/transformers/pull/24773#issuecomment-1632287538
+        pad_token_id=1,
+        bos_token_id=49406,
+        eos_token_id=49407,
+        projection_size=None,
+        **kwargs,
+    ):
+        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
+
+        self.vocab_size = vocab_size
+        self.hidden_size = hidden_size
+        self.intermediate_size = intermediate_size
+        self.num_hidden_layers = num_hidden_layers
+        self.num_attention_heads = num_attention_heads
+        self.max_position_embeddings = max_position_embeddings
+        self.layer_norm_eps = layer_norm_eps
+        self.hidden_act = hidden_act
+        self.attention_dropout = attention_dropout
+        self.projection_size = projection_size if projection_size is not None else hidden_size
+
+
+class Siglip2VisionConfig(PretrainedConfig):
+    r"""
+    This is the configuration class to store the configuration of a [`Siglip2VisionModel`]. It is used to instantiate a
+    Siglip2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a
+    configuration with the defaults will yield a similar configuration to that of the vision encoder of the Siglip2
+    [google/siglip2-base-patch16-naflex](https://huggingface.co/google/siglip2-base-patch16-naflex) architecture.
+
+    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+    documentation from [`PretrainedConfig`] for more information.
+
+    Args:
+        hidden_size (`int`, *optional*, defaults to 1152):
+            Dimensionality of the encoder layers and the pooler layer.
+        intermediate_size (`int`, *optional*, defaults to 4304):
+            Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
+        num_hidden_layers (`int`, *optional*, defaults to 27):
+            Number of hidden layers in the Transformer encoder.
+        num_attention_heads (`int`, *optional*, defaults to 16):
+            Number of attention heads for each attention layer in the Transformer encoder.
+        num_channels (`int`, *optional*, defaults to 3):
+            Number of channels in the input images.
+        num_patches (`int`, *optional*, defaults to 256):
+            The number of patches in the image with the size of (`patch_size`, `patch_size`).
+            The image is resized to fill a maximum of this number of patches while preserving
+            the aspect ratio. In case the resulting number of patches is lower, the image is
+            padded in the "patch" dimension.
+        patch_size (`int`, *optional*, defaults to 16):
+            The size (resolution) of each patch.
+        hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`):
+            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+            `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
+        layer_norm_eps (`float`, *optional*, defaults to 1e-06):
+            The epsilon used by the layer normalization layers.
+        attention_dropout (`float`, *optional*, defaults to 0.0):
+            The dropout ratio for the attention probabilities.
+
+    Example:
+
+    ```python
+    >>> from transformers import Siglip2VisionConfig, Siglip2VisionModel
+
+    >>> # Initializing a Siglip2VisionConfig with google/siglip2-base-patch16-naflex style configuration
+    >>> configuration = Siglip2VisionConfig()
+
+    >>> # Initializing a Siglip2VisionModel (with random weights) from the google/siglip2-base-patch16-naflex style configuration
+    >>> model = Siglip2VisionModel(configuration)
+
+    >>> # Accessing the model configuration
+    >>> configuration = model.config
+    ```"""
+
+    model_type = "siglip2_vision_model"
+    base_config_key = "vision_config"
+
+    def __init__(
+        self,
+        hidden_size=1152,
+        intermediate_size=4304,
+        num_hidden_layers=27,
+        num_attention_heads=16,
+        num_channels=3,
+        num_patches=256,
+        patch_size=16,
+        hidden_act="gelu_pytorch_tanh",
+        layer_norm_eps=1e-6,
+        attention_dropout=0.0,
+        **kwargs,
+    ):
+        super().__init__(**kwargs)
+
+        self.hidden_size = hidden_size
+        self.intermediate_size = intermediate_size
+        self.num_hidden_layers = num_hidden_layers
+        self.num_attention_heads = num_attention_heads
+        self.num_channels = num_channels
+        self.patch_size = patch_size
+        # self.image_size = 384  # fixme
+        self.attention_dropout = attention_dropout
+        self.layer_norm_eps = layer_norm_eps
+        self.hidden_act = hidden_act
+        self.num_patches = num_patches
+
+
+class Siglip2Config(PretrainedConfig):
+    r"""
+    [`Siglip2Config`] is the configuration class to store the configuration of a [`Siglip2Model`]. It is used to
+    instantiate a Siglip2 model according to the specified arguments, defining the text model and vision model configs.
+    Instantiating a configuration with the defaults will yield a similar configuration to that of the Siglip2
+    [google/siglip2-base-patch16-224](https://huggingface.co/google/siglip2-base-patch16-224) architecture.
+
+    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+    documentation from [`PretrainedConfig`] for more information.
+
+    Args:
+        text_config (`dict`, *optional*):
+            Dictionary of configuration options used to initialize [`Siglip2TextConfig`].
+        vision_config (`dict`, *optional*):
+            Dictionary of configuration options used to initialize [`Siglip2VisionConfig`].
+        kwargs (*optional*):
+            Dictionary of keyword arguments.
+
+    Example:
+
+    ```python
+    >>> from transformers import Siglip2Config, Siglip2Model
+
+    >>> # Initializing a Siglip2Config with google/siglip2-base-patch16-224 style configuration
+    >>> configuration = Siglip2Config()
+
+    >>> # Initializing a Siglip2Model (with random weights) from the google/siglip2-base-patch16-224 style configuration
+    >>> model = Siglip2Model(configuration)
+
+    >>> # Accessing the model configuration
+    >>> configuration = model.config
+
+    >>> # We can also initialize a Siglip2Config from a Siglip2TextConfig and a Siglip2VisionConfig
+    >>> from transformers import Siglip2TextConfig, Siglip2VisionConfig
+
+    >>> # Initializing a Siglip2Text and Siglip2Vision configuration
+    >>> config_text = Siglip2TextConfig()
+    >>> config_vision = Siglip2VisionConfig()
+
+    >>> config = Siglip2Config.from_text_vision_configs(config_text, config_vision)
+    ```"""
+
+    model_type = "siglip2"
+    sub_configs = {"text_config": Siglip2TextConfig, "vision_config": Siglip2VisionConfig}
+
+    def __init__(self, text_config=None, vision_config=None, **kwargs):
+        super().__init__(**kwargs)
+
+        if text_config is None:
+            text_config = {}
+            logger.info("`text_config` is `None`. Initializing the `Siglip2TextConfig` with default values.")
+
+        if vision_config is None:
+            vision_config = {}
+            logger.info("`vision_config` is `None`. Initializing the `Siglip2VisionConfig` with default values.")
+
+        self.text_config = Siglip2TextConfig(**text_config)
+        self.vision_config = Siglip2VisionConfig(**vision_config)
+
+        self.initializer_factor = 1.0
+
+    @classmethod
+    def from_text_vision_configs(cls, text_config: Siglip2TextConfig, vision_config: Siglip2VisionConfig, **kwargs):
+        r"""
+        Instantiate a [`Siglip2Config`] (or a derived class) from siglip2 text model configuration and siglip2 vision
+        model configuration.
+
+        Returns:
+            [`Siglip2Config`]: An instance of a configuration object
+        """
+
+        return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs)
+
+
+ logger = logging.get_logger(__name__)
446
+
447
+ # General docstring
448
+ _CONFIG_FOR_DOC = "Siglip2VisionConfig"
449
+
450
+
451
+ @dataclass
452
+ class Siglip2VisionOutput(ModelOutput):
453
+ """
454
+ Base class for vision model's outputs that also contains image embeddings of the pooling of the last hidden states.
455
+
456
+ Args:
457
+ image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)` *optional* returned when model is initialized with `with_projection=True`):
458
+ The image embeddings obtained by applying the projection layer to the pooler_output.
459
+ last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
460
+ Sequence of hidden-states at the output of the last layer of the model.
461
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
462
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
463
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
464
+
465
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
466
+ attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
467
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
468
+ sequence_length)`.
469
+
470
+ Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
471
+ heads.
472
+ """
473
+
474
+ image_embeds: Optional[torch.FloatTensor] = None
475
+ last_hidden_state: torch.FloatTensor = None
476
+ hidden_states: Optional[Tuple[torch.FloatTensor, ...]] = None
477
+ attentions: Optional[Tuple[torch.FloatTensor, ...]] = None
478
+
479
+
480
+ class Siglip2VisionEmbeddings(nn.Module):
481
+ def __init__(self, config: Siglip2VisionConfig):
482
+ super().__init__()
483
+ self.config = config
484
+ self.embed_dim = config.hidden_size
485
+ self.image_size = config.image_size
486
+ self.patch_size = config.patch_size
487
+ self.patch_embedding = nn.Conv2d(
488
+ in_channels=config.num_channels,
489
+ out_channels=self.embed_dim,
490
+ kernel_size=self.patch_size,
491
+ stride=self.patch_size,
492
+ padding="valid",
493
+ )
494
+ # import pdb; pdb.set_trace()
495
+ self.num_patches_per_side = self.image_size // self.patch_size
496
+ self.num_patches = self.num_patches_per_side**2
497
+ self.num_positions = self.num_patches
498
+ self.position_embedding = nn.Embedding(self.num_positions, self.embed_dim)
499
+
500
+     def forward(self, pixel_values: torch.FloatTensor, spatial_shapes: torch.LongTensor) -> torch.Tensor:
+         """
+         Args:
+             ### original version ###
+             pixel_values (`torch.FloatTensor`):
+                 Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size)
+             ### modified version ###
+             pixel_values (`List`):
+                 [C, H, W]
+             spatial_shapes (`List[Tuple[int, int]]`):
+                 Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to
+         """
+         batch_size = len(pixel_values)
+         target_dtype = self.patch_embedding.weight.dtype
+         patch_embeds = []
+         max_seq_len = max(h * w for h, w in spatial_shapes)
+         boundaries = torch.arange(1 / self.num_patches_per_side, 1.0, 1 / self.num_patches_per_side)
+         position_ids = torch.full(
+             size=(batch_size, max_seq_len),
+             fill_value=0,
+         )
+         for batch_idx, image in enumerate(pixel_values):
+             single_image_patch_embed = self.patch_embedding(image.to(dtype=target_dtype))  # (bs, dim, h, w)
+             single_embed = rearrange(single_image_patch_embed, 'b d h w -> b (h w) d')
+             patch_embeds.append(single_embed.squeeze(0))
+
+             nb_patches_h = spatial_shapes[batch_idx][0]
+             nb_patches_w = spatial_shapes[batch_idx][1]
+             fractional_coords_h = torch.arange(0, 1 - 1e-6, 1 / nb_patches_h)
+             fractional_coords_w = torch.arange(0, 1 - 1e-6, 1 / nb_patches_w)
+             bucket_coords_h = torch.bucketize(fractional_coords_h, boundaries, right=True)
+             bucket_coords_w = torch.bucketize(fractional_coords_w, boundaries, right=True)
+             pos_ids = (bucket_coords_h[:, None] * self.num_patches_per_side + bucket_coords_w).flatten()
+             position_ids[batch_idx][:nb_patches_h * nb_patches_w] = pos_ids
+         embeddings = torch.nn.utils.rnn.pad_sequence(patch_embeds, batch_first=True, padding_value=0.0)
+         position_ids = position_ids.to(self.position_embedding.weight.device)
+         embeddings = embeddings + self.position_embedding(position_ids)
+         return embeddings
+
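The bucketize step above maps each image's fractional patch coordinates onto the fixed `num_patches_per_side` grid of learned position embeddings, so variable-resolution inputs reuse the pretrained table without interpolation. A pure-Python sketch of the same index computation (the function name and list-based types are illustrative, not part of this module):

```python
from bisect import bisect_right  # mirrors torch.bucketize(..., right=True)

def bucketized_position_ids(nb_patches_h, nb_patches_w, num_patches_per_side):
    # Interior grid boundaries of the pretrained position-embedding table.
    boundaries = [i / num_patches_per_side for i in range(1, num_patches_per_side)]
    # Fractional patch coordinates, as in torch.arange(0, 1 - 1e-6, 1 / nb).
    frac_h = [i / nb_patches_h for i in range(nb_patches_h)]
    frac_w = [i / nb_patches_w for i in range(nb_patches_w)]
    bucket_h = [bisect_right(boundaries, v) for v in frac_h]
    bucket_w = [bisect_right(boundaries, v) for v in frac_w]
    # Row-major flattening, matching bucket_coords_h[:, None] * side + bucket_coords_w.
    return [h * num_patches_per_side + w for h in bucket_h for w in bucket_w]
```

For a 2x2 patch grid against a 4x4 embedding table this picks the spread-out ids `[0, 2, 8, 10]`; when the grid matches the table size it degenerates to the identity ordering.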
542
+ def apply_rope(xq, xk, freqs_cis, use_flash_attention=False):
+     if freqs_cis is None:
+         if use_flash_attention:
+             return xq, xk
+         else:
+             return xq.transpose(1, 2), xk.transpose(1, 2)
+     freqs_cis = freqs_cis.unsqueeze(-2)  # ..., 1, head_dim/2
+     # ..., num_heads, head_dim/2
+     xq_ = torch.view_as_complex(xq.float().view(*xq.shape[:-1], -1, 2))
+     xk_ = torch.view_as_complex(xk.float().view(*xk.shape[:-1], -1, 2))
+     xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(-2)  # ..., num_heads, head_dim
+     xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(-2)  # ..., num_heads, head_dim
+     xq_out = xq_out.type_as(xq)
+     xk_out = xk_out.type_as(xk)
+     if use_flash_attention:
+         return xq_out, xk_out
+     else:
+         return xq_out.transpose(1, 2), xk_out.transpose(1, 2)
+
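`apply_rope` treats consecutive channel pairs as complex numbers and multiplies them by the unit-magnitude `freqs_cis` entries, which rotates each pair without changing its norm. A pure-Python sketch of that rotation for a single head vector (the function name is illustrative):

```python
import cmath

def rope_rotate(vec, angles):
    # Pair channels (x0, x1) -> complex x0 + i*x1, rotate by e^{i*theta},
    # then flatten back to real pairs, as view_as_complex / view_as_real do.
    out = []
    for (x0, x1), theta in zip(zip(vec[::2], vec[1::2]), angles):
        z = complex(x0, x1) * cmath.exp(1j * theta)
        out.extend([z.real, z.imag])
    return out
```

Rotating `[1, 0, 0, 1]` by 90 degrees per pair gives `[0, 1, -1, 0]`, and the vector norm is preserved, which is why RoPE can be applied before either the eager or the flash attention kernels.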
561
+ class Siglip2Attention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+         self.embed_dim = config.hidden_size
+         self.num_heads = config.num_attention_heads
+         self.head_dim = self.embed_dim // self.num_heads
+         if self.head_dim * self.num_heads != self.embed_dim:
+             raise ValueError(
+                 f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
+                 f" {self.num_heads})."
+             )
+         self.scale = self.head_dim**-0.5
+         self.dropout = config.attention_dropout
+
+         self.k_proj = nn.Linear(self.embed_dim, self.embed_dim)
+         self.v_proj = nn.Linear(self.embed_dim, self.embed_dim)
+         self.q_proj = nn.Linear(self.embed_dim, self.embed_dim)
+         self.out_proj = nn.Linear(self.embed_dim, self.embed_dim)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = False,
+         position_embedding: Optional[torch.Tensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+         """Input shape: Batch x Time x Channel"""
+
+         batch_size, q_len, _ = hidden_states.size()
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.v_proj(hidden_states)
+
+         query_states = query_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         key_states = key_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         ### apply rotary position embedding ###
+         query_states, key_states = apply_rope(query_states, key_states, position_embedding)
+
+         value_states = value_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+
+         k_v_seq_len = key_states.shape[-2]
+         attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) * self.scale
+
+         if attn_weights.size() != (batch_size, self.num_heads, q_len, k_v_seq_len):
+             raise ValueError(
+                 f"Attention weights should be of size {(batch_size, self.num_heads, q_len, k_v_seq_len)}, but is"
+                 f" {attn_weights.size()}"
+             )
+
+         if attention_mask is not None:
+             if attention_mask.size() != (batch_size, 1, q_len, k_v_seq_len):
+                 raise ValueError(
+                     f"Attention mask should be of size {(batch_size, 1, q_len, k_v_seq_len)}, but is {attention_mask.size()}"
+                 )
+             attn_weights = attn_weights + attention_mask
+
+         # upcast attention to fp32
+         attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+         attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
+         attn_output = torch.matmul(attn_weights, value_states)
+
+         if attn_output.size() != (batch_size, self.num_heads, q_len, self.head_dim):
+             raise ValueError(
+                 f"`attn_output` should be of size {(batch_size, self.num_heads, q_len, self.head_dim)}, but is"
+                 f" {attn_output.size()}"
+             )
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(batch_size, q_len, self.embed_dim)
+
+         attn_output = self.out_proj(attn_output)
+
+         return attn_output, attn_weights
+
+
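The eager path above is plain `softmax(QK^T * scale) @ V`. A minimal scalar-channel sketch of a single query row (the function name and the max-subtraction detail are illustrative; the module instead upcasts the softmax to fp32 for stability):

```python
import math

def attention_row(q, keys, values, scale):
    # Scaled dot products of one query against all keys.
    logits = [q * k * scale for k in keys]
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    total = sum(weights)
    # Softmax-weighted sum of the values.
    return sum(v * w / total for v, w in zip(values, weights))
```

With identical keys the output is the mean of the values; with one dominant key the output collapses onto that key's value.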
640
+ class Siglip2SdpaAttention(Siglip2Attention):
+     """
+     Siglip2 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
+     `Siglip2Attention`, as the weights of the module stay untouched. The only changes are on the forward pass, to
+     adapt to the SDPA API.
+     """
+
+     is_causal = False
+
+     # Adapted from Siglip2Attention.forward and transformers.models.llama.modeling_llama.LlamaSdpaAttention.forward
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = False,
+         position_embedding: Optional[torch.Tensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
+         if output_attentions:
+             # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
+             logger.warning_once(
+                 "Siglip2Model is using Siglip2SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
+                 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
+             )
+             return super().forward(
+                 hidden_states=hidden_states,
+                 attention_mask=attention_mask,
+                 output_attentions=output_attentions,
+                 position_embedding=position_embedding,
+             )
+
+         batch_size, q_len, _ = hidden_states.size()
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.v_proj(hidden_states)
+
+         ### apply rotary position embedding ###
+         query_states = query_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         key_states = key_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         query_states, key_states = apply_rope(query_states, key_states, position_embedding)
+
+         value_states = value_states.view(batch_size, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+
+         # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
+         # Reference: https://github.com/pytorch/pytorch/issues/112577.
+         if query_states.device.type == "cuda" and attention_mask is not None:
+             query_states = query_states.contiguous()
+             key_states = key_states.contiguous()
+             value_states = value_states.contiguous()
+
+         # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of an inline conditional assignment
+         # in SDPA to support both torch.compile's dynamic shapes and full graph options. An inline conditional prevents dynamic shapes from compiling.
+         is_causal = True if self.is_causal and q_len > 1 else False
+
+         attn_output = torch.nn.functional.scaled_dot_product_attention(
+             query_states,
+             key_states,
+             value_states,
+             attn_mask=attention_mask,
+             dropout_p=self.dropout if self.training else 0.0,
+             is_causal=is_causal,
+         )
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.view(batch_size, q_len, self.embed_dim)
+
+         attn_output = self.out_proj(attn_output)
+
+         return attn_output, None
+
711
+ class Siglip2FlashAttention2(Siglip2Attention):
+     """
+     Siglip2Attention flash attention module. This module inherits from `Siglip2Attention`, as the weights of the module
+     stay untouched. The only required change is in the forward pass, where it needs to correctly call the public API of
+     flash attention and deal with padding tokens in case the input contains any of them.
+     """
+
+     is_causal = False
+
+     def __init__(self, *args, **kwargs):
+         super().__init__(*args, **kwargs)
+
+         # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
+         # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignment, which was made the default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
+         # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
+         self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
+
+     # Adapted from transformers.models.llama.modeling_llama.LlamaFlashAttention2.forward
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.LongTensor] = None,
+         output_attentions: bool = False,
+         position_embedding: Optional[torch.Tensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         output_attentions = False
+
+         batch_size, q_len, _ = hidden_states.size()
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.v_proj(hidden_states)
+
+         # Flash attention requires the input to have the shape
+         # batch_size x seq_length x head_dim x hidden_dim
+         # therefore we just need to keep the original shape
+         query_states = query_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         key_states = key_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+         value_states = value_states.view(batch_size, q_len, self.num_heads, self.head_dim)
+
+         ### apply rotary position embedding ###
+         query_states, key_states = apply_rope(query_states, key_states, position_embedding, use_flash_attention=True)
+         dropout_rate = self.dropout if self.training else 0.0
+
+         # In PEFT, usually we cast the layer norms in float32 for training stability reasons;
+         # therefore the input hidden states get silently cast to float32. Hence, we need to
+         # cast them back to the correct dtype just to be sure everything works as expected.
+         # This might slow down training & inference, so it is recommended to not cast the
+         # LayerNorms in fp32.
+
+         input_dtype = query_states.dtype
+         if input_dtype == torch.float32:
+             if torch.is_autocast_enabled():
+                 target_dtype = torch.get_autocast_gpu_dtype()
+             # Handle the case where the model is quantized
+             elif hasattr(self.config, "_pre_quantization_dtype"):
+                 target_dtype = self.config._pre_quantization_dtype
+             else:
+                 target_dtype = self.q_proj.weight.dtype
+
+             logger.warning_once(
+                 f"The input hidden states seem to be silently cast to float32; this might be related to"
+                 f" the fact that you have upcast embedding or layer norm layers to float32. We will cast the input"
+                 f" back to {target_dtype}."
+             )
+
+             query_states = query_states.to(target_dtype)
+             key_states = key_states.to(target_dtype)
+             value_states = value_states.to(target_dtype)
+         attn_output = _flash_attention_forward(
+             query_states,
+             key_states,
+             value_states,
+             attention_mask,
+             q_len,
+             dropout=dropout_rate,
+             is_causal=self.is_causal,
+             use_top_left_mask=self._flash_attn_uses_top_left_mask,
+         )
+
+         attn_output = attn_output.reshape(batch_size, q_len, self.embed_dim).contiguous()
+         attn_output = self.out_proj(attn_output)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights
+
799
+ class Siglip2MLP(nn.Module):
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+         self.activation_fn = ACT2FN[config.hidden_act]
+         self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
+         self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
+
+     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
+         hidden_states = self.fc1(hidden_states)
+         hidden_states = self.activation_fn(hidden_states)
+         hidden_states = self.fc2(hidden_states)
+         return hidden_states
+
+
+ SIGLIP2_ATTENTION_CLASSES = {
+     "eager": Siglip2Attention,
+     "sdpa": Siglip2SdpaAttention,
+     "flash_attention_2": Siglip2FlashAttention2,
+ }
+
820
+ ### If merging may happen at any layer, optional layers can be inserted into Siglip2EncoderLayer and gated by a flag: used when True, skipped when False.
+ ### The attention_mask has to be maintained alongside: whenever a merge happens, the attention_mask must be updated to match.
+ ### TODO: simplify this code ###
+ class PatchMergingLayer(nn.Module):
+     def __init__(self, embed_dim, enable_merging=True, merging_method="avg_pooling", norm_layer=nn.LayerNorm):
+         """
+         :param embed_dim: embedding dimension of the Transformer tokens
+         :param enable_merging: whether to enable token merging
+         :param merging_method: merging strategy, e.g. 'mlp' or 'avg_pooling' (plus the variants below)
+         """
+         super().__init__()
+         self.enable_merging = enable_merging
+         self.merging_method = merging_method
+         self.reduction = nn.Identity()
+         self.norm = nn.Identity()
+         self.res_reduction = nn.Identity()
+         self.res_norm = nn.Identity()
+         self.zero_init_fc = nn.Linear(embed_dim, embed_dim, bias=False)
+
+         if self.merging_method == 'mlp':
+             self.reduction = nn.Sequential(
+                 nn.Linear(4 * embed_dim, 4 * embed_dim, bias=False),
+                 nn.GELU(),
+                 nn.Linear(4 * embed_dim, embed_dim, bias=False),
+             )
+             self.norm = norm_layer(4 * embed_dim)
+
+         elif self.merging_method == 'avg_pooling':
+             pass
+
+         elif self.merging_method == 'max_pooling':
+             pass
+
+         elif self.merging_method == 'resampler':
+             self.reduction = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim),
+             )
+
+             self.q_norm = norm_layer(embed_dim)
+             self.k_norm = norm_layer(embed_dim)
+             self.v_norm = norm_layer(embed_dim)
+             self.q_proj = nn.Linear(embed_dim, embed_dim, bias=False)
+             self.k_proj = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim),
+             )
+             self.v_proj = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim),
+             )
+             self.attn = nn.MultiheadAttention(embed_dim, 16)
+         elif self.merging_method == 'avg_and_resampler':
+             self.res_reduction = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim, bias=False),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim, bias=False),
+             )
+             self.res_norm = norm_layer(embed_dim)
+             self.k_norm = norm_layer(embed_dim)
+             self.v_norm = norm_layer(embed_dim)
+             self.k_proj = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim),
+             )
+             self.v_proj = nn.Sequential(
+                 nn.Linear(embed_dim, embed_dim),
+                 nn.GELU(),
+                 nn.Linear(embed_dim, embed_dim),
+             )
+             self.attn = nn.MultiheadAttention(embed_dim, 16)
+
+         elif self.merging_method == 'avg_and_mlp':
+             self.res_reduction = nn.Sequential(
+                 nn.Linear(4 * embed_dim, 4 * embed_dim, bias=False),
+                 nn.GELU(),
+                 nn.Linear(4 * embed_dim, embed_dim, bias=False),
+             )
+             self.res_norm = norm_layer(4 * embed_dim)
+
+     def forward(self, x, spatial_shapes, attention_mask=None):
+         if not self.enable_merging:
+             return x, spatial_shapes, attention_mask, x
+         ### keep the input x as a residual feature for the final feature fusion ###
+         feature_x = x
+         ### TODO: double-check the input dimensions ###
+         batch_size, max_seq_len, embed_dim = x.shape
+         output_x = torch.zeros_like(x[:, :max_seq_len // 4, :], dtype=x.dtype, device=x.device)
+         if attention_mask is None:
+             output_attention_mask = None
+         elif (attention_mask == 1).any():
+             output_attention_mask = torch.zeros((batch_size, max_seq_len // 4), dtype=attention_mask.dtype, device=attention_mask.device)
+         else:
+             output_attention_mask = torch.zeros((batch_size, 1, max_seq_len // 4, max_seq_len // 4), dtype=attention_mask.dtype, device=attention_mask.device)
+         res_list = []
+         x_i_list = []
+         idx_list = []
+         seq_len_list = []
+         idx = 0
+         for i, spatial_shape in enumerate(spatial_shapes):
+             H, W = spatial_shape
+             x_i = x[i][:H * W].reshape(H, W, embed_dim)
+             if self.merging_method == 'mlp':
+                 x_i = rearrange(x_i, '(h p1) (w p2) c -> (h w) (p1 p2 c)', p1=2, p2=2)
+                 x_i_list.append(x_i)
+             elif self.merging_method == 'avg_pooling':
+                 x_i = rearrange(x_i, 'h w c -> c h w')
+                 x_i = F.avg_pool2d(x_i, kernel_size=2, stride=2)  # 2x2 average pooling
+                 x_i = rearrange(x_i, 'c h w -> (h w) c')  # flatten back
+                 x_i_list.append(x_i)
+             elif self.merging_method == 'max_pooling':
+                 x_i = rearrange(x_i, 'h w c -> c h w')
+                 x_i = F.max_pool2d(x_i, kernel_size=2, stride=2)  # 2x2 max pooling
+                 x_i = rearrange(x_i, 'c h w -> (h w) c')
+                 x_i_list.append(x_i)
+             elif self.merging_method == 'resampler':
+                 k = rearrange(x_i, '(h p1) (w p2) c -> (h w) (p1 p2) c', p1=2, p2=2)
+                 v = k
+                 x_i = rearrange(x_i, 'h w c -> c h w')
+                 q = F.avg_pool2d(x_i, kernel_size=2, stride=2)  # 2x2 average pooling
+                 q_res = rearrange(q, 'c h w -> (h w) c')  # flatten back
+                 q = q_res.unsqueeze(1)  # (h*w, 1, c)
+                 q = self.q_norm(self.q_proj(q)).permute(1, 0, 2)
+                 k = self.k_norm(self.k_proj(k)).permute(1, 0, 2)
+                 v = self.v_norm(self.v_proj(v)).permute(1, 0, 2)
+                 out = self.attn(q, k, v)[0]
+                 x_i = out.squeeze(0) + q_res
+                 x_i_list.append(x_i)
+             elif self.merging_method == "avg_and_resampler":
+                 ### residual branch ###
+                 k = rearrange(x_i, '(h p1) (w p2) c -> (h w) (p1 p2) c', p1=2, p2=2)
+                 v = k
+                 x_i_res = rearrange(x_i, 'h w c -> c h w')
+                 q = F.avg_pool2d(x_i_res, kernel_size=2, stride=2)  # 2x2 average pooling
+                 q = rearrange(q, 'c h w -> (h w) c')  # flatten back
+                 q = q.unsqueeze(0)
+                 k = self.k_norm(self.k_proj(k)).permute(1, 0, 2)
+                 v = self.v_norm(self.v_proj(v)).permute(1, 0, 2)
+                 out = self.attn(q, k, v)[0]
+                 x_i_res = out.squeeze(0)
+                 res_list.append(x_i_res)
+                 ### main forward branch ###
+                 x_i = rearrange(x_i, 'h w c -> c h w')
+                 x_i = F.avg_pool2d(x_i, kernel_size=2, stride=2)  # 2x2 average pooling
+                 x_i = rearrange(x_i, 'c h w -> (h w) c')  # flatten back
+                 x_i_list.append(x_i)
+
+             elif self.merging_method == 'avg_and_mlp':
+                 ### residual branch ###
+                 x_i_res = rearrange(x_i, '(h p1) (w p2) c -> (h w) (p1 p2 c)', p1=2, p2=2)
+                 res_list.append(x_i_res)
+                 ### main forward branch ###
+                 x_i = rearrange(x_i, 'h w c -> c h w')
+                 x_i = F.avg_pool2d(x_i, kernel_size=2, stride=2)  # 2x2 average pooling
+                 x_i = rearrange(x_i, 'c h w -> (h w) c')  # flatten back
+                 x_i_list.append(x_i)
+
+             seq_len = x_i.size(0)
+             seq_len_list.append(seq_len)
+             idx_list.append((idx, idx + seq_len))
+             idx += seq_len
+         ### main forward ###
+         new_x = torch.cat(x_i_list, dim=0)
+         new_x = self.norm(new_x)
+         new_x = self.reduction(new_x)
+         ### add the residual forward ###
+         if res_list != []:
+             res_x = torch.cat(res_list, dim=0)
+             res_x = self.res_norm(res_x)
+             res_x = self.res_reduction(res_x)
+             res_x = self.zero_init_fc(res_x)
+             new_x += res_x
+
+         for i in range(batch_size):
+             m, n = idx_list[i]
+             seq_len = seq_len_list[i]
+             output_x[i][:seq_len] = new_x[m:n]
+             if attention_mask is not None:
+                 if (attention_mask == 1).any():
+                     output_attention_mask[i][:seq_len] = 1
+                 else:
+                     inf_value = torch.finfo(attention_mask.dtype).min
+                     output_attention_mask[i][0][:, seq_len:] = inf_value
+         return output_x, spatial_shapes // 2, output_attention_mask, feature_x
+
+
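Every merging variant above quarters the token count by collapsing non-overlapping 2x2 patch neighborhoods. A pure-Python sketch of the simplest branch, 'avg_pooling', on a scalar-channel grid (the function name is illustrative; even H and W are assumed, as in the module):

```python
def merge_2x2_avg(grid):
    # grid: H x W scalars; returns the (H//2) x (W//2) block averages,
    # like F.avg_pool2d(x, kernel_size=2, stride=2)
    H, W = len(grid), len(grid[0])
    return [[(grid[i][j] + grid[i][j + 1] + grid[i + 1][j] + grid[i + 1][j + 1]) / 4
             for j in range(0, W, 2)]
            for i in range(0, H, 2)]
```

A 2x2 grid collapses to a single token holding the mean of the four; this is also why `spatial_shapes // 2` and `max_seq_len // 4` appear in the bookkeeping above.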
1010
+ class Siglip2EncoderLayer(nn.Module):
+     def __init__(self, config: Siglip2Config, layer_index):
+         super().__init__()
+         self.embed_dim = config.hidden_size
+         self.self_attn = SIGLIP2_ATTENTION_CLASSES[config._attn_implementation](config=config)
+         self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
+         self.mlp = Siglip2MLP(config)
+         self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps)
+         # layer_index indicates which layers contain a merger layer
+         self.position_embed_dim = self.embed_dim // config.num_attention_heads
+         self.layer_index = layer_index
+         if hasattr(config, 'vision_config'):
+             if layer_index in config.vision_config['merger_layer_index']:
+                 self.merger = PatchMergingLayer(config.hidden_size, merging_method=config.vision_config['merging_method'])
+             else:
+                 self.merger = None
+         else:
+             if layer_index in config.merger_layer_index:
+                 self.merger = PatchMergingLayer(config.hidden_size, merging_method=config.merging_method)
+             else:
+                 self.merger = None
+
+     def get_position_embedding(self, position_embedding, spatial_shapes, target_length=None):
+         shapes = spatial_shapes.tolist()
+         _position_embedding = [position_embedding[:h, :w].reshape(-1, self.position_embed_dim // 2) for h, w in shapes]
+
+         real_list = [p.real for p in _position_embedding]
+         imag_list = [p.imag for p in _position_embedding]
+
+         real_padded = torch.nn.utils.rnn.pad_sequence(real_list, batch_first=True, padding_value=1.0)
+         imag_padded = torch.nn.utils.rnn.pad_sequence(imag_list, batch_first=True, padding_value=0.0)
+
+         position_embedding_complex = torch.complex(real_padded, imag_padded)
+         return position_embedding_complex
+
+     # Ignore copy
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         spatial_shapes,
+         attention_mask: torch.Tensor,
+         position_embedding,
+         output_attentions: Optional[bool] = False,
+     ) -> Tuple[torch.FloatTensor]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`):
+                 Input to the layer of shape `(batch, seq_len, embed_dim)`.
+             attention_mask (`torch.FloatTensor`):
+                 Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where padding elements are indicated by very large negative values.
+             output_attentions (`bool`, *optional*, defaults to `False`):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+         """
+         if position_embedding is not None:
+             position_embedding = self.get_position_embedding(position_embedding, spatial_shapes)
+         residual = hidden_states
+
+         hidden_states = self.layer_norm1(hidden_states)
+         hidden_states, attn_weights = self.self_attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             output_attentions=output_attentions,
+             position_embedding=position_embedding,
+         )
+         hidden_states = residual + hidden_states
+
+         residual = hidden_states
+         hidden_states = self.layer_norm2(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+
+         # if this layer has a merger, downsample the tokens and update the bookkeeping tensors
+         if self.merger is not None:
+             hidden_states, spatial_shapes, attention_mask, feature_x = self.merger(hidden_states, spatial_shapes, attention_mask)
+             outputs = (hidden_states, spatial_shapes, attention_mask, attn_weights, feature_x)
+         else:
+             outputs = (hidden_states,)
+
+         if output_attentions:
+             outputs += (attn_weights,)
+
+         return outputs
+
1094
+ class FusedLayer(nn.Module):
+     def __init__(self, dim, down_scale_times):
+         super().__init__()
+         self.dim = dim
+         self.down_scale_times = down_scale_times
+         self.predictor = nn.ModuleList([nn.Sequential(
+             nn.Linear(dim * 2, dim),
+             nn.GELU(),
+             nn.Linear(dim, dim),
+         ) for _ in range(down_scale_times)])
+         self.ln_list = nn.ModuleList([nn.LayerNorm(dim) for _ in range(down_scale_times)])
+
+     def forward(self, hidden_states, feature_x_list, spatial_shapes, use_fused_layer=True):
+         if not use_fused_layer:
+             return hidden_states
+         else:
+             fused_features = []
+             for batch_idx, spatial_shape in enumerate(spatial_shapes):
+                 cur_h = spatial_shape[0]
+                 cur_w = spatial_shape[1]
+                 cur_new_feature_x = []
+                 for down_scale_idx, feature_x in enumerate(feature_x_list):
+                     feature_x = feature_x[batch_idx]
+                     down_scale_rate = (self.down_scale_times - down_scale_idx) * 2
+                     feature_x_h = down_scale_rate * cur_h
+                     feature_x_w = down_scale_rate * cur_w
+                     new_feature_x = feature_x[:feature_x_h * feature_x_w, :]
+                     new_feature_x = rearrange(new_feature_x, '(h w) d -> h w d', h=feature_x_h, w=feature_x_w)
+                     new_feature_x = rearrange(new_feature_x, '(cur_h p1) (cur_w p2) d -> (cur_h cur_w) (p1 p2) d', cur_h=cur_h, cur_w=cur_w)
+                     pooled_feature_x = new_feature_x.mean(-2, keepdim=True).expand(-1, down_scale_rate**2, -1)
+                     fused_feature_x = torch.cat([new_feature_x, pooled_feature_x], dim=-1)
+                     score = self.predictor[down_scale_idx](fused_feature_x)
+                     normalized_score = F.softmax(score, dim=-2)
+                     new_feature_x = (new_feature_x * normalized_score).sum(dim=-2)
+                     new_feature_x = self.ln_list[down_scale_idx](new_feature_x)
+                     cur_new_feature_x.append(new_feature_x)
+                 cur_new_feature_x = torch.stack(cur_new_feature_x, dim=0)
+                 fused_features.append(cur_new_feature_x)
+             return (hidden_states, fused_features)
+
+
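The pooling step inside FusedLayer replaces a plain mean over each p1*p2 group with a learned, softmax-normalized weighting: the predictor scores every token in the group, and the tokens are summed with softmax weights. A scalar sketch of that step (the function name and scalar scores are illustrative; the real scores are per-channel and produced by the predictor MLP):

```python
import math

def score_weighted_pool(tokens, scores):
    # Softmax the scores over the group, then take the weighted sum of the tokens.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return sum(t * e / total for t, e in zip(tokens, exps))
```

Equal scores reduce this to the ordinary mean, so the learned predictor can only sharpen or re-weight the pooling, never lose the averaging baseline entirely.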
1140
+ class Siglip2Encoder(nn.Module):
+     """
+     Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a
+     [`Siglip2EncoderLayer`].
+
+     Args:
+         config: Siglip2Config
+     """
+
+     def __init__(self, config: Siglip2Config):
+         super().__init__()
+         self.config = config
+         self.layers = nn.ModuleList([Siglip2EncoderLayer(config, layer_index=i) for i in range(config.num_hidden_layers)])
+         self.gradient_checkpointing = False
+
+         ############ important change ############
+         if hasattr(config, 'vision_config'):
+             self.use_fused_layer = False if 'use_fused_layer' not in config.vision_config else config.vision_config['use_fused_layer']
+             if self.use_fused_layer:
+                 self.fused_layer = FusedLayer(config.hidden_size, len(config.vision_config['merger_layer_index']))
+         else:
+             self.use_fused_layer = False if 'use_fused_layer' not in config else config.use_fused_layer
+             if self.use_fused_layer:
+                 self.fused_layer = FusedLayer(config.hidden_size, len(config.merger_layer_index))
+
1165
+     # Ignore copy
+     def forward(
+         self,
+         inputs_embeds,
+         spatial_shapes,
+         attention_mask: Optional[torch.Tensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         position_embedding: Optional[list] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, BaseModelOutput]:
+         r"""
+         Args:
+             inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
+                 Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
+                 This is useful if you want more control over how to convert `input_ids` indices into associated vectors
+                 than the model's internal embedding lookup matrix.
+             attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+                 - 1 for tokens that are **not masked**,
+                 - 0 for tokens that are **masked**.
+
+                 [What are attention masks?](../glossary#attention-mask)
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             output_hidden_states (`bool`, *optional*):
+                 Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
+                 for more detail.
+             return_dict (`bool`, *optional*):
+                 Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+         """
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         encoder_states = () if output_hidden_states else None
+         all_attentions = () if output_attentions else None
+
+         hidden_states = inputs_embeds
+         new_attention_mask = attention_mask
+         feature_x_list = []
+         if position_embedding is None:
+             cur_position_embedding = None
+         else:
+             position_embedding_idx = 0
+             cur_position_embedding = position_embedding[position_embedding_idx]
+         for encoder_layer in self.layers:
+             if output_hidden_states:
+                 encoder_states = encoder_states + (hidden_states,)
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     encoder_layer.__call__,
+                     hidden_states,
+                     spatial_shapes,
+                     new_attention_mask,
+                     cur_position_embedding,
+                     output_attentions,
+                 )
+             else:
+                 layer_outputs = encoder_layer(
+                     hidden_states,
+                     spatial_shapes,
+                     new_attention_mask,
+                     cur_position_embedding,
+                     output_attentions=output_attentions,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             ## Swin-style case: a merger layer changed the token layout
+             if len(layer_outputs) > 2 and not output_attentions:
+                 spatial_shapes = layer_outputs[1]
+                 new_attention_mask = layer_outputs[2]
+                 feature_x = layer_outputs[-1]
+                 feature_x_list.append(feature_x)
+                 ## TODO: position_embedding
+                 if position_embedding is not None:
+                     position_embedding_idx += 1
+                     cur_position_embedding = position_embedding[position_embedding_idx]
+             if output_attentions:
+                 all_attentions = all_attentions + (layer_outputs[1],)
+
+         if output_hidden_states:
+             encoder_states = encoder_states + (hidden_states,)
+
+         if len(feature_x_list) > 0 and self.use_fused_layer:
+             hidden_states = self.fused_layer(hidden_states, feature_x_list, spatial_shapes)
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
+         return BaseModelOutput(
+             last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
+         )
+
+
1264
+ SIGLIP2_VISION_INPUTS_DOCSTRING = r"""
1265
+ Args:
1266
+ pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
1267
+ Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
1268
+ [`AutoImageProcessor`]. See [`CLIPImageProcessor.__call__`] for details.
1269
+ output_attentions (`bool`, *optional*):
1270
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
1271
+ tensors for more detail.
1272
+ output_hidden_states (`bool`, *optional*):
1273
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1274
+ more detail.
1275
+ interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
1276
+ Whether to interpolate the pre-trained position encodings.
1277
+ return_dict (`bool`, *optional*):
1278
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1279
+ """
1280
+
+ class Rope2DPosEmb(nn.Module):
+     """2D rotary position embedding with multi-resolution support.
+
+     This class is intended to be used in the following way:
+     1. Before training, create an instance of Rope2DPosEmb. This instance will hold the precomputed cis.
+     2. Before each forward pass, call `get_freqs_cis_by_*` to get the `freqs_cis` tensor for this iteration.
+     3. During the forward pass, pass the `freqs_cis` tensor to each attention layer, and call `apply` just before each attention operation.
+     The rope is shared across all attention layers and all heads.
+
+     Refs:
+     - RoFormer: https://arxiv.org/abs/2104.09864
+     - VisionLLaMA: https://arxiv.org/abs/2403.00522
+     - https://github.com/Meituan-AutoML/VisionLLaMA/blob/main/dit/models.py
+
+     Args:
+         dim (int): usually the multi-head attention dimension, should be divisible by 4 (TODO: relax this constraint if needed)
+         max_height (int): the maximum height of the 2D grid
+         max_width (int): the maximum width of the 2D grid
+         theta_base (float): the base of the theta
+         device (str): the device to store the precomputed cis
+     """
+
+     def __init__(self, dim: int, max_height: int, max_width: int, theta_base=10000):
+         super().__init__()
+         self.dim = dim
+         assert self.dim % 4 == 0, "dim must be divisible by 4"
+         self.max_height = max_height
+         self.max_width = max_width
+         self.theta_base = theta_base
+         self.freqs_cis = None
+
+     def _precompute_freqs_cis(self, max_height, max_width, device: torch.device) -> torch.Tensor:
+         """Calculate the cis(freqs) for each position in the 2D grid.
+
+         Return: complex tensor of shape (max_height, max_width, dim//2) with values:
+             height axis: ret[h, w, 2*i]   = cis(h * theta_base**(-4*i/dim))
+             width axis:  ret[h, w, 2*i+1] = cis(w * theta_base**(-4*i/dim)), with i in [0, dim//4)
+         Note: `cis` is the mathematical notation defined by cis x = cos x + i sin x.
+         """
+         N = max_height * max_width
+         flat_pos = torch.arange(0, N).float().to(device)
+         # Use the local (possibly downsampled) max_width, not self.max_width,
+         # so that precompute_n_freqs_cis produces correct tables for halved grids.
+         x_pos = flat_pos % max_width
+         y_pos = flat_pos // max_width
+         dim_range = (
+             torch.arange(0, self.dim, 4)[: (self.dim // 4)].float().to(device)
+         )  # C/4
+         freqs = 1.0 / (self.theta_base ** (dim_range / self.dim))
+         x_freqs = torch.outer(x_pos, freqs).float()  # N, C/4
+         y_freqs = torch.outer(y_pos, freqs).float()  # N, C/4
+         x_cis = torch.polar(torch.ones_like(x_freqs), x_freqs)  # N, C/4
+         y_cis = torch.polar(torch.ones_like(y_freqs), y_freqs)  # N, C/4
+         # N, C/4, 2
+         freqs_cis = torch.cat(
+             [x_cis.unsqueeze(dim=-1), y_cis.unsqueeze(dim=-1)], dim=-1
+         )
+         # max_height, max_width, C/2
+         freqs_cis = freqs_cis.reshape(max_height, max_width, -1)
+         return freqs_cis
+
+     def precompute_n_freqs_cis(self, merger_layer_num, device):
+         max_height, max_width = self.max_height, self.max_width
+         n_freqs_cis = []
+         ori_freqs_cis = self._precompute_freqs_cis(max_height, max_width, device)
+         n_freqs_cis.append(ori_freqs_cis)
+         for i in range(merger_layer_num):
+             max_height = max_height // 2
+             max_width = max_width // 2
+             freqs_cis = self._precompute_freqs_cis(max_height, max_width, device)
+             n_freqs_cis.append(freqs_cis)
+         return n_freqs_cis
+
+
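For reference, the cis table built by `_precompute_freqs_cis` can be reproduced without torch. A minimal pure-Python sketch (function name, grid sizes, and `dim` are illustrative; it mirrors the code's interleaving, with the width-axis cis in the even slots):

```python
import cmath

def precompute_freqs_cis(dim, max_height, max_width, theta_base=10000.0):
    """Pure-Python sketch of Rope2DPosEmb._precompute_freqs_cis.

    Returns a (max_height * max_width) x (dim // 2) table of complex
    rotations, interleaving width-axis and height-axis cis values.
    """
    assert dim % 4 == 0, "dim must be divisible by 4"
    # One frequency per channel group: theta_base ** (-4*i / dim), i in [0, dim//4)
    freqs = [theta_base ** (-(4 * i) / dim) for i in range(dim // 4)]
    table = []
    for pos in range(max_height * max_width):
        x, y = pos % max_width, pos // max_width  # column, row of the flat index
        row = []
        for f in freqs:
            row.append(cmath.exp(1j * x * f))  # width-axis cis(x * f)
            row.append(cmath.exp(1j * y * f))  # height-axis cis(y * f)
        table.append(row)
    return table
```

Every entry is a unit-magnitude complex number, so applying it to a query/key pair rotates without rescaling.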
+ class Siglip2VisionTransformer(nn.Module):
+     def __init__(self, config: Siglip2VisionConfig):
+         super().__init__()
+         config._attn_implementation = "sdpa" if not hasattr(config, "use_flash_attention_2") else "flash_attention_2"
+         self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
+
+         self.config = config
+         embed_dim = config.hidden_size
+         self.embeddings = Siglip2VisionEmbeddings(config)
+         self.encoder = Siglip2Encoder(config)
+         self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
+         self.use_head = False if not hasattr(config, "vision_use_head") else config.vision_use_head
+         ############ important change: optional 2D RoPE ############
+         if hasattr(config, 'vision_config'):
+             self.use_rope2d = False if 'use_rope2d' not in config.vision_config else config.vision_config['use_rope2d']
+         else:
+             self.use_rope2d = False if 'use_rope2d' not in config else config.use_rope2d
+         if self.use_rope2d:
+             self.rope2d = Rope2DPosEmb(embed_dim // config.num_attention_heads, 512, 512)
+
+     @add_start_docstrings_to_model_forward(SIGLIP2_VISION_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=Siglip2VisionConfig)
+     def forward(
+         self,
+         pixel_values,
+         attention_mask: torch.Tensor,
+         spatial_shapes: torch.LongTensor,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPooling]:
+         r"""
+         Returns:
+
+         """
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         hidden_states = self.embeddings(pixel_values, spatial_shapes)
+         if attention_mask is not None and not self._use_flash_attention_2:
+             # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len]
+             encoder_attention_mask = _prepare_4d_attention_mask(attention_mask, hidden_states.dtype)
+         else:
+             encoder_attention_mask = attention_mask.detach().to(dtype=torch.int32)
+
+         ### position_embedding ###
+         if self.use_rope2d:
+             if hasattr(self.config, 'vision_config'):
+                 position_embedding = self.rope2d.precompute_n_freqs_cis(len(self.config.vision_config['merger_layer_index']), hidden_states.device)
+             else:
+                 position_embedding = self.rope2d.precompute_n_freqs_cis(len(self.config.merger_layer_index), hidden_states.device)
+         else:
+             position_embedding = None
+
+         encoder_outputs = self.encoder(
+             inputs_embeds=hidden_states,
+             spatial_shapes=spatial_shapes,
+             attention_mask=encoder_attention_mask,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             position_embedding=position_embedding,
+             return_dict=return_dict,
+         )
+         last_hidden_state = encoder_outputs[0]
+         feature_x_list = []  # stays empty when the encoder returns no fused Swin features
+         if isinstance(last_hidden_state, tuple):
+             last_hidden_state, feature_x_list = last_hidden_state
+             last_hidden_state = self.post_layernorm(last_hidden_state)
+             pooled_output = self.head(last_hidden_state)
+             last_hidden_state = (last_hidden_state, feature_x_list)
+         else:
+             last_hidden_state = self.post_layernorm(last_hidden_state)
+             pooled_output = self.head(last_hidden_state)
+
+         if not return_dict:
+             return (last_hidden_state, pooled_output, feature_x_list) + encoder_outputs[1:]
+
+         return BaseModelOutputWithPooling(
+             last_hidden_state=last_hidden_state,
+             pooler_output=pooled_output,
+             hidden_states=encoder_outputs.hidden_states,
+             attentions=encoder_outputs.attentions,
+         )
+
+
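The `_prepare_4d_attention_mask` call in the forward pass above expands a `[batch, seq_len]` padding mask into the additive `[batch, 1, tgt_len, src_len]` form that SDPA-style attention consumes. A rough pure-Python sketch of that expansion (the function name is made up; this is not the transformers implementation):

```python
NEG_INF = float("-inf")

def expand_padding_mask(mask_2d):
    """Expand a [batch, src_len] 0/1 padding mask to an additive
    [batch, 1, tgt_len, src_len] mask: 0.0 where attention is allowed,
    -inf where a key position is padding. tgt_len == src_len here,
    as in self-attention."""
    out = []
    for row in mask_2d:
        src_len = len(row)
        # Every query row sees the same key-side padding pattern.
        rows = [[0.0 if row[s] else NEG_INF for s in range(src_len)]
                for _ in range(src_len)]
        out.append([rows])  # singleton dim broadcasts over attention heads
    return out
```

Adding this mask to the raw attention scores before the softmax zeroes out the probability mass on padded keys.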
+ def _trunc_normal_(tensor, mean, std, a, b):
+     # Cut & paste from PyTorch official master until it's in a few official releases - RW
+     # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
+     def norm_cdf(x):
+         # Computes standard normal cumulative distribution function
+         return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0
+
+     if (mean < a - 2 * std) or (mean > b + 2 * std):
+         warnings.warn(
+             "mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
+             "The distribution of values may be incorrect.",
+             stacklevel=2,
+         )
+
+     # Values are generated by using a truncated uniform distribution and
+     # then using the inverse CDF for the normal distribution.
+     # Get upper and lower cdf values
+     l = norm_cdf((a - mean) / std)
+     u = norm_cdf((b - mean) / std)
+
+     # Uniformly fill tensor with values from [l, u], then translate to
+     # [2l-1, 2u-1].
+     tensor.uniform_(2 * l - 1, 2 * u - 1)
+
+     # Use inverse cdf transform for normal distribution to get truncated
+     # standard normal
+     tensor.erfinv_()
+
+     # Transform to proper mean, std
+     tensor.mul_(std * math.sqrt(2.0))
+     tensor.add_(mean)
+
+     # Clamp to ensure it's in the proper range
+     tensor.clamp_(min=a, max=b)
+
+
+ def trunc_normal_tf_(
+     tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0
+ ) -> torch.Tensor:
+     r"""Fills the input Tensor with values drawn from a truncated
+     normal distribution. The values are effectively drawn from the
+     normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
+     with values outside :math:`[a, b]` redrawn until they are within
+     the bounds. The method used for generating the random values works
+     best when :math:`a \leq \text{mean} \leq b`.
+
+     NOTE: this 'tf' variant behaves closer to the Tensorflow / JAX impl, where the
+     bounds [a, b] are applied when sampling the normal distribution with mean=0, std=1.0,
+     and the result is subsequently scaled and shifted by the mean and std args.
+
+     Args:
+         tensor: an n-dimensional `torch.Tensor`
+         mean: the mean of the normal distribution
+         std: the standard deviation of the normal distribution
+         a: the minimum cutoff value
+         b: the maximum cutoff value
+     """
+     with torch.no_grad():
+         _trunc_normal_(tensor, 0, 1.0, a, b)
+         tensor.mul_(std).add_(mean)
+     return tensor
+
+
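The inverse-CDF trick inside `_trunc_normal_` is easy to check in isolation. A standard-library-only sketch, where `statistics.NormalDist` plays the role of `norm_cdf` and `erfinv` (the function name is illustrative):

```python
import random
from statistics import NormalDist

def trunc_normal_sample(mean=0.0, std=1.0, a=-2.0, b=2.0, rng=random.random):
    """One draw from N(mean, std^2) truncated to [a, b], via the same
    inverse-CDF construction as `_trunc_normal_`: sample uniformly
    between the CDF values of the bounds, then map back through the
    inverse CDF of the standard normal."""
    z = NormalDist()  # standard normal
    lo, hi = z.cdf((a - mean) / std), z.cdf((b - mean) / std)
    u = lo + (hi - lo) * rng()        # uniform on [cdf_lo, cdf_hi)
    return mean + std * z.inv_cdf(u)  # guaranteed to land in [a, b]
```

Because the uniform sample never leaves `[cdf(a'), cdf(b'))`, every draw lands inside the bounds by construction, with no rejection loop.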
+ def variance_scaling_(tensor, scale=1.0, mode="fan_in", distribution="normal"):
+     fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
+     if mode == "fan_in":
+         denom = fan_in
+     elif mode == "fan_out":
+         denom = fan_out
+     elif mode == "fan_avg":
+         denom = (fan_in + fan_out) / 2
+     else:
+         raise ValueError(f"invalid mode {mode}")
+
+     variance = scale / denom
+
+     if distribution == "truncated_normal":
+         # constant is stddev of standard normal truncated to (-2, 2)
+         trunc_normal_tf_(tensor, std=math.sqrt(variance) / 0.87962566103423978)
+     elif distribution == "normal":
+         with torch.no_grad():
+             tensor.normal_(std=math.sqrt(variance))
+     elif distribution == "uniform":
+         bound = math.sqrt(3 * variance)
+         with torch.no_grad():
+             tensor.uniform_(-bound, bound)
+     else:
+         raise ValueError(f"invalid distribution {distribution}")
+
+
+ def lecun_normal_(tensor):
+     variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal")
+
+
+ def default_flax_embed_init(tensor):
+     variance_scaling_(tensor, mode="fan_in", distribution="normal")
+
+
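`variance_scaling_` delegates the fan computation to `_calculate_fan_in_and_fan_out`, which treats dim 1 of a weight as input features, dim 0 as output features, and multiplies in any receptive-field dims. A rough stand-in (a sketch, not torch's implementation; function names are made up) shows where the `scale / denom` variance comes from:

```python
from math import prod

def fan_in_and_fan_out(shape):
    """For shape = (out_features, in_features, *receptive_field),
    as with nn.Linear and nn.Conv2d weights."""
    receptive = prod(shape[2:]) if len(shape) > 2 else 1
    fan_in = shape[1] * receptive
    fan_out = shape[0] * receptive
    return fan_in, fan_out

def scaled_variance(shape, scale=1.0, mode="fan_in"):
    """The `scale / denom` quantity that variance_scaling_ uses as the
    variance of the init distribution."""
    fan_in, fan_out = fan_in_and_fan_out(shape)
    denom = {"fan_in": fan_in, "fan_out": fan_out,
             "fan_avg": (fan_in + fan_out) / 2}[mode]
    return scale / denom
```

With `mode="fan_in"` and a truncated-normal distribution this reduces to the LeCun-normal init that `lecun_normal_` applies to linear and conv weights.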
+ class Siglip2PreTrainedModel(PreTrainedModel):
+     """
+     An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
+     models.
+     """
+
+     config_class = Siglip2Config
+     base_model_prefix = "siglip2"
+     supports_gradient_checkpointing = True
+
+     _no_split_modules = [
+         "Siglip2TextEmbeddings",
+         "Siglip2EncoderLayer",
+         "Siglip2VisionEmbeddings",
+         "Siglip2MultiheadAttentionPoolingHead",
+     ]
+     _supports_flash_attn_2 = True
+     _supports_sdpa = True
+
+     def _init_weights(self, module):
+         """Initialize the weights"""
+         if isinstance(module, Siglip2VisionEmbeddings):
+             width = self.config.hidden_size
+             nn.init.normal_(module.position_embedding.weight, std=1 / np.sqrt(width))
+         elif isinstance(module, nn.Embedding):
+             default_flax_embed_init(module.weight)
+         elif isinstance(module, Siglip2Attention):
+             nn.init.xavier_uniform_(module.q_proj.weight)
+             nn.init.xavier_uniform_(module.k_proj.weight)
+             nn.init.xavier_uniform_(module.v_proj.weight)
+             nn.init.xavier_uniform_(module.out_proj.weight)
+             nn.init.zeros_(module.q_proj.bias)
+             nn.init.zeros_(module.k_proj.bias)
+             nn.init.zeros_(module.v_proj.bias)
+             nn.init.zeros_(module.out_proj.bias)
+         elif isinstance(module, Siglip2MLP):
+             nn.init.xavier_uniform_(module.fc1.weight)
+             nn.init.xavier_uniform_(module.fc2.weight)
+             nn.init.normal_(module.fc1.bias, std=1e-6)
+             nn.init.normal_(module.fc2.bias, std=1e-6)
+         elif isinstance(module, (nn.Linear, nn.Conv2d)):
+             lecun_normal_(module.weight)
+             if module.bias is not None:
+                 nn.init.zeros_(module.bias)
+         elif isinstance(module, nn.LayerNorm):
+             module.bias.data.zero_()
+             module.weight.data.fill_(1.0)
+
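The attention and MLP projections in `_init_weights` use `nn.init.xavier_uniform_`, which samples from `U(-a, a)` with `a = gain * sqrt(6 / (fan_in + fan_out))`. A one-line sketch of that bound:

```python
import math

def xavier_uniform_bound(fan_in, fan_out, gain=1.0):
    """Half-width of the Xavier/Glorot uniform distribution U(-a, a)."""
    return gain * math.sqrt(6.0 / (fan_in + fan_out))
```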
+
+ class Siglip2VisionModel(Siglip2PreTrainedModel):
+     config_class = Siglip2VisionConfig
+     main_input_name = "pixel_values"
+
+     def __init__(self, config: Siglip2VisionConfig):
+         super().__init__(config)
+
+         self.vision_model = Siglip2VisionTransformer(config)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self) -> nn.Module:
+         return self.vision_model.embeddings.patch_embedding
+
+     @add_start_docstrings_to_model_forward(SIGLIP2_VISION_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=Siglip2VisionConfig)
+     def forward(
+         self,
+         pixel_values: torch.FloatTensor,
+         pixel_attention_mask: torch.Tensor,
+         spatial_shapes: torch.LongTensor,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPooling]:
+         r"""
+         Returns:
+
+         Examples:
+
+         ```python
+         >>> from PIL import Image
+         >>> import requests
+         >>> from transformers import AutoProcessor, Siglip2VisionModel
+
+         >>> model = Siglip2VisionModel.from_pretrained("google/siglip2-base-patch16-224")
+         >>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")
+
+         >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+         >>> image = Image.open(requests.get(url, stream=True).raw)
+
+         >>> inputs = processor(images=image, return_tensors="pt")
+
+         >>> outputs = model(**inputs)
+         >>> last_hidden_state = outputs.last_hidden_state
+         >>> pooled_output = outputs.pooler_output  # pooled features
+         ```"""
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         return self.vision_model(
+             pixel_values=pixel_values,
+             attention_mask=pixel_attention_mask,
+             spatial_shapes=spatial_shapes,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+
+ class SigLip2SwinVisionTower_ps8(nn.Module):
+     def __init__(self, vision_tower, vision_tower_cfg, delay_load=False):
+         super().__init__()
+
+         self.is_loaded = False
+
+         self.config = Siglip2VisionConfig()
+
+         self.vision_tower_name = vision_tower
+
+         self.image_processor = SigLipImageProcessor()
+
+         if not delay_load:
+             rank0_print(f"Loading vision tower: {vision_tower}")
+             self.load_model()
+         elif getattr(vision_tower_cfg, "unfreeze_mm_vision_tower", False):
+             # TODO: better detector is needed.
+             rank0_print(f"The checkpoint seems to contain `vision_tower` weights: `unfreeze_mm_vision_tower`: True.")
+             self.load_model()
+         elif hasattr(vision_tower_cfg, "mm_tunable_parts") and "mm_vision_tower" in vision_tower_cfg.mm_tunable_parts:
+             rank0_print(f"The checkpoint seems to contain `vision_tower` weights: `mm_tunable_parts` contains `mm_vision_tower`.")
+             self.load_model()
+         else:
+             self.cfg_only = self.config
+
+     def load_model(self, device_map=None):
+         if self.is_loaded:
+             rank0_print("{} is already loaded, `load_model` called again, skipping.".format(self.vision_tower_name))
+             return
+
+         #### ignore_mismatched_sizes=True ####
+         self.vision_tower = Siglip2VisionModel.from_pretrained(self.vision_tower_name, device_map=device_map)
+
+         print('siglip2_naflex_swin')
+         self.vision_tower.vision_model.head = nn.Identity()
+         self._init_zero_merger_(self.vision_tower)
+         self.vision_tower.requires_grad_(False)
+         self.is_loaded = True
+
+     def _init_zero_merger_(self, model):
+         """
+         Zero-initialize merger parameters (names containing both "zero" and "merger").
+         """
+         for name, param in model.named_parameters():
+             if "zero" in name and "merger" in name:
+                 param.data.zero_()
+
+     def forward(self, images, patch_sizes):
+         if type(images) is list:
+             image_list = []
+             pixel_values = []
+             pixel_attention_masks = []
+             spatial_shapes = []
+             max_length = max([patch_size[0] * patch_size[1] for patch_size in patch_sizes])
+             encoder_patch_size = self.vision_tower.vision_model.embeddings.patch_size
+             for image, spatial_shape in zip(images, patch_sizes):
+                 valid_pixel_num = spatial_shape[0] * spatial_shape[1]
+                 spatial_shape = torch.as_tensor(spatial_shape)[None]
+                 image = image.to(device=self.device, dtype=self.dtype).unsqueeze(0)
+                 pixel_value = rearrange(image, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=encoder_patch_size, p2=encoder_patch_size)
+                 # b, n, c
+                 padding_pixel = torch.zeros_like(pixel_value)[:, :1]
+                 pixel_value = torch.cat([pixel_value, padding_pixel.repeat(1, max_length - valid_pixel_num, 1)], dim=1)
+                 pixel_attention_mask = torch.zeros_like(pixel_value[:, :, 0])
+                 pixel_attention_mask[:, :valid_pixel_num] = 1
+
+                 image_list.append(image)
+                 pixel_values.append(pixel_value)
+                 pixel_attention_masks.append(pixel_attention_mask)
+                 spatial_shapes.append(spatial_shape)
+
+             pixel_values = torch.cat(pixel_values)
+             pixel_attention_masks = torch.cat(pixel_attention_masks)
+             spatial_shapes = torch.cat(spatial_shapes)
+
+             # Pass the patchified, padded pixel values (not the raw image list).
+             image_forward_outs = self.vision_tower(pixel_values,
+                                                    pixel_attention_mask=pixel_attention_masks,
+                                                    spatial_shapes=spatial_shapes,
+                                                    output_hidden_states=True)
+
+             if isinstance(image_forward_outs.last_hidden_state, tuple):
+                 image_features, fused_features = image_forward_outs.last_hidden_state
+                 image_features = image_features.to(pixel_values.dtype)
+                 image_features = image_features.split(1)
+                 image_features = list(zip(image_features, fused_features))
+                 return image_features
+             else:
+                 image_features = image_forward_outs.last_hidden_state.to(pixel_values.dtype)
+                 image_features = image_features.split(1)
+                 # should be a list of per-image features
+
+         else:
+             # Batched (non-list) tensor input is not supported yet.
+             raise NotImplementedError('no support for batched (non-list) input')
+             # image_forward_outs = self.vision_tower(images.to(device=self.device, dtype=self.dtype), spatial_shapes=patch_sizes, output_hidden_states=True)
+             # image_features = image_forward_outs.last_hidden_state.to(images.dtype)
+             # image_features = image_forward_outs.hidden_states[-2].to(images.dtype)
+
+         return image_features
+
+     @property
+     def dummy_feature(self):
+         return torch.zeros(1, self.hidden_size, device=self.device, dtype=self.dtype)
+
+     @property
+     def dtype(self):
+         for p in self.vision_tower.parameters():
+             return p.dtype
+
+     @property
+     def device(self):
+         for p in self.vision_tower.parameters():
+             return p.device
+
+     @property
+     def hidden_size(self):
+         return self.config.hidden_size
+
+     @property
+     def num_patches(self):
+         return (self.config.image_size // self.config.patch_size) ** 2
+
+     @property
+     def num_patches_per_side(self):
+         return self.config.image_size // self.config.patch_size
+         # return self.model_config["vision_cfg"]["image_size"] // self.model_config["vision_cfg"]["patch_size"]
+
+     @property
+     def image_size(self):
+         return self.config.image_size
+
+
+ # TODO: write a quick test.
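The `rearrange` pattern in `SigLip2SwinVisionTower_ps8.forward` flattens each `p × p` patch into a single token. A dependency-free sketch of the same reshape for one image stored as a nested `[c][H][W]` list (illustrative only; the real code uses einops on tensors):

```python
def patchify(img, p):
    """[c][H][W] nested lists -> [h*w] tokens of length p*p*c, matching
    einops 'c (h p1) (w p2) -> (h w) (p1 p2 c)' for a single image."""
    c, H, W = len(img), len(img[0]), len(img[0][0])
    assert H % p == 0 and W % p == 0
    h, w = H // p, W // p
    tokens = []
    for hi in range(h):              # patch row
        for wi in range(w):          # patch column
            tok = []
            for p1 in range(p):          # row inside the patch
                for p2 in range(p):      # column inside the patch
                    for ch in range(c):  # channel is the fastest axis
                        tok.append(img[ch][hi * p + p1][wi * p + p2])
            tokens.append(tok)
    return tokens
```

Padding shorter images to `max_length` tokens, as the forward pass does, then just appends all-zero rows after the `h*w` valid tokens and masks them out.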
VLMEvalKit-sudoku/scripts/visualize.ipynb ADDED
@@ -0,0 +1,266 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import json\n",
+     "import copy as cp\n",
+     "import numpy as np\n",
+     "import matplotlib.pyplot as plt\n",
+     "import matplotlib.font_manager as fm\n",
+     "\n",
+     "def download_file(url, filename=None):\n",
+     "    from urllib.request import urlretrieve\n",
+     "    if filename is None:\n",
+     "        filename = url.split('/')[-1]\n",
+     "    urlretrieve(url, filename)\n",
+     "\n",
+     "font_URL = 'http://opencompass.openxlab.space/utils/Fonts/segoepr.ttf'\n",
+     "download_file(font_URL)\n",
+     "\n",
+     "font12 = fm.FontProperties(fname='segoepr.ttf', size=12)\n",
+     "font15 = fm.FontProperties(fname='segoepr.ttf', size=15, weight='bold')\n",
+     "font18 = fm.FontProperties(fname='segoepr.ttf', size=18, weight='bold')\n",
+     "\n",
+     "DATA_URL = 'http://opencompass.openxlab.space/utils/OpenVLM.json'\n",
+     "download_file(DATA_URL)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "def pre_normalize(raw_data, labels):\n",
+     "    data_list = cp.deepcopy(raw_data)\n",
+     "    minimum, maximum, max_range, range_map = {}, {}, 0, {}\n",
+     "    for lb in labels:\n",
+     "        minimum[lb] = min([x[lb] for x in data_list])\n",
+     "        maximum[lb] = max([x[lb] for x in data_list])\n",
+     "        max_range = max(max_range, maximum[lb] - minimum[lb])\n",
+     "    max_range *= 1.25\n",
+     "    for lb in labels:\n",
+     "        mid = (minimum[lb] + maximum[lb]) / 2\n",
+     "        new_range = (mid - max_range / 2, mid + max_range / 2) if (mid + max_range / 2) < 100 else (100 - max_range, 100)\n",
+     "        range_map[lb] = new_range\n",
+     "        for item in data_list:\n",
+     "            assert new_range[0] <= item[lb] <= new_range[1]\n",
+     "            item[lb] = (item[lb] - new_range[0]) / max_range * 100\n",
+     "    return data_list, range_map\n",
+     "\n",
+     "# handle benchmark scores that are so high they would fall out of range\n",
+     "def log_normalize(raw_data, labels):\n",
+     "    data_list = cp.deepcopy(raw_data)\n",
+     "    minimum, maximum, max_range, range_map = {}, {}, 0, {}\n",
+     "    for lb in labels:\n",
+     "        minimum[lb] = min([np.log(x[lb]) for x in data_list])\n",
+     "        maximum[lb] = max([np.log(x[lb]) for x in data_list])\n",
+     "        max_range = max(max_range, maximum[lb] - minimum[lb])\n",
+     "    max_range *= 1.005\n",
+     "    for lb in labels:\n",
+     "        mid = (minimum[lb] + maximum[lb]) / 2\n",
+     "        new_range = (mid - max_range / 2, mid + max_range / 2) if (mid + max_range / 2) < 100 else (100 - max_range, 100)\n",
+     "        range_map[lb] = new_range\n",
+     "        for item in data_list:\n",
+     "            assert new_range[0] <= np.log(item[lb]) <= new_range[1]\n",
+     "            item[lb] = (np.log(item[lb]) - new_range[0]) / max_range * 100\n",
+     "    return data_list, range_map"
+    ]
+   },
+ },
74
+ {
75
+ "cell_type": "code",
76
+ "execution_count": null,
77
+ "metadata": {},
78
+ "outputs": [],
79
+ "source": [
80
+ "# Draw MMBench Radar Graph\n",
81
+ "data = json.loads(open('OpenVLM.json').read())['results']\n",
82
+ "models = list(data)\n",
83
+ "print(models)\n",
84
+ "\n",
85
+ "# model2vis = [\n",
86
+ "# 'GPT-4v (detail: low)', 'GeminiProVision', 'Qwen-VL-Plus', \n",
87
+ "# 'InternLM-XComposer2-VL', 'LLaVA-v1.5-13B', 'CogVLM-17B-Chat',\n",
88
+ "# 'mPLUG-Owl2', 'Qwen-VL-Chat', 'IDEFICS-80B-Instruct'\n",
89
+ "# ]\n",
90
+ "\n",
91
+ "model2vis = [\n",
92
+ " # 'GPT-4v (detail: low)', 'GeminiProVision', 'InternLM-XComposer2-VL', \n",
93
+ " 'GPT-4v (1106, detail-low)', 'Gemini-1.0-Pro', 'Gemini-1.5-Pro', #'Gemini-1.5-Flash', 'Qwen-VL-Plus', \n",
94
+ " 'InternLM-XComposer2', 'LLaVA-v1.5-13B', 'CogVLM-17B-Chat',\n",
95
+ " 'mPLUG-Owl2', 'Qwen-VL-Chat', 'IDEFICS-80B-Instruct'\n",
96
+ "]\n",
97
+ "\n",
98
+ "colors = [\n",
99
+ " '#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', \n",
100
+ " '#e377c2', '#7f7f7f', '#bcbd22'\n",
101
+ "]"
102
+ ]
103
+ },
104
+ {
105
+ "cell_type": "code",
106
+ "execution_count": null,
107
+ "metadata": {},
108
+ "outputs": [],
109
+ "source": [
110
+ "from collections import defaultdict\n",
111
+ "\n",
112
+ "split = 'MMBench_TEST_EN'\n",
113
+ "# data_sub = {k: v[split] for k, v in data.items()}\n",
114
+ "data_sub = {k: defaultdict(int, v)[split] for k, v in data.items()}\n",
115
+ "# solve the problem that some model lack the evaluation of MMBench_TEST_EN\n",
116
+ "\n",
117
+ "labels = list(data_sub[model2vis[0]])\n",
118
+ "labels.remove('Overall')\n",
119
+ "num_vars = len(labels)\n",
120
+ "\n",
121
+ "raw_data = [data_sub[m] for m in model2vis]\n",
122
+ "data_list, range_map = pre_normalize(raw_data, labels)\n",
123
+ "\n",
124
+ "alpha = 0.25\n",
125
+ "angles = np.linspace(0, 2 * np.pi, num_vars, endpoint=False).tolist()\n",
126
+ "angles_deg = np.linspace(0, 360, num_vars, endpoint=False).tolist()\n",
127
+ "fig, ax_base = plt.subplots(nrows=1, ncols=1, figsize=(10, 10), subplot_kw=dict(polar=True))\n",
128
+ "\n",
129
+ "for i in range(len(data_list)):\n",
130
+ " item = data_list[i]\n",
131
+ " model_name = model2vis[i]\n",
132
+ " color = colors[i]\n",
133
+ " tmp_angles = angles[:] + [angles[0]]\n",
134
+ " tmp_values = [item[lb] for lb in labels] + [item[labels[0]]]\n",
135
+ " ax_base.plot(tmp_angles, tmp_values, color=color, linewidth=1, linestyle='solid', label=model_name)\n",
136
+ " ax_base.fill(tmp_angles, tmp_values, color=color, alpha=alpha)\n",
137
+ " \n",
138
+ "angles += [angles[0]]\n",
139
+ "ax_base.set_ylim(0, 100)\n",
140
+ "ax_base.set_yticks([40, 60, 80, 100])\n",
141
+ "ax_base.set_yticklabels([''] * 4)\n",
142
+ "\n",
143
+ "ax_base.tick_params(pad=25)\n",
144
+ "ax_base.set_xticks(angles[:-1])\n",
145
+ "ax_base.set_xticklabels(labels, fontproperties=font18)\n",
146
+ "\n",
147
+ "leg = ax_base.legend(loc='center right', bbox_to_anchor=(1.6, 0.5), prop=font15, ncol=1, frameon=True, labelspacing=1.2)\n",
148
+ "for line in leg.get_lines():\n",
149
+ " line.set_linewidth(2.5)\n",
150
+ "\n",
151
+ "cx, cy, sz = 0.44, 0.435, 0.34\n",
152
+ "axes = [fig.add_axes([cx - sz, cy - sz, cx + sz, cy + sz], projection='polar', label='axes%d' % i) for i in range(num_vars)]\n",
153
+ " \n",
154
+ "for ax, angle, label in zip(axes, angles_deg, labels):\n",
155
+ " ax.patch.set_visible(False)\n",
156
+ " ax.grid(False)\n",
157
+ " ax.xaxis.set_visible(False)\n",
158
+ " cur_range = range_map[label]\n",
159
+ " label_list = [cur_range[0] + (cur_range[1] - cur_range[0]) / 5 * i for i in range(2, 6)]\n",
160
+ " label_list = [f'{x:.1f}' for x in label_list]\n",
161
+ " ax.set_rgrids(range(40, 120, 20), angle=angle, labels=label_list, font_properties=font12)\n",
162
+ " ax.spines['polar'].set_visible(False)\n",
163
+ " ax.set_ylim(0, 100)\n",
164
+ "\n",
165
+ "title_text = f'{len(model2vis)} Representative VLMs on MMBench Test.'\n",
166
+ "plt.figtext(.7, .95, title_text, fontproperties=font18, ha='center')\n",
167
+ "plt.show()"
168
+ ]
169
+ },
170
+ {
171
+ "cell_type": "code",
172
+ "execution_count": null,
173
+ "metadata": {},
174
+ "outputs": [],
175
+ "source": [
176
+ "labels = ['SEEDBench_IMG', 'CCBench', 'MMBench_TEST_EN', 'MMBench_TEST_CN', 'MME', 'MMVet', 'MMMU_VAL', 'MathVista', 'HallusionBench', 'LLaVABench']\n",
177
+ "num_vars = len(labels)\n",
178
+ "\n",
179
+ "raw_data = [{k: data[m][k]['Overall'] for k in labels} for m in model2vis]\n",
180
+ "data_list, range_map = pre_normalize(raw_data, labels)\n",
181
+ "\n",
182
+ "alpha = 0.25\n",
183
+ "angles = np.linspace(0, 2 * np.pi, num_vars, endpoint=False).tolist()\n",
184
+ "angles_deg = np.linspace(0, 360, num_vars, endpoint=False).tolist()\n",
185
+ "fig, ax_base = plt.subplots(nrows=1, ncols=1, figsize=(10, 10), subplot_kw=dict(polar=True))\n",
186
+ "\n",
187
+ "for i in range(len(data_list)):\n",
188
+ " item = data_list[i]\n",
189
+ " model_name = model2vis[i]\n",
190
+ " color = colors[i]\n",
191
+ " tmp_angles = angles[:] + [angles[0]]\n",
192
+ " tmp_values = [item[lb] for lb in labels] + [item[labels[0]]]\n",
193
+ " ax_base.plot(tmp_angles, tmp_values, color=color, linewidth=1, linestyle='solid', label=model_name)\n",
194
+ " ax_base.fill(tmp_angles, tmp_values, color=color, alpha=alpha)\n",
195
+ " \n",
196
+ "angles += [angles[0]]\n",
197
+ "ax_base.set_ylim(0, 100)\n",
198
+ "ax_base.set_yticks([40, 60, 80, 100])\n",
199
+ "ax_base.set_yticklabels([''] * 4)\n",
200
+ "\n",
201
+ "ax_base.tick_params(pad=15)\n",
202
+ "ax_base.set_xticks(angles[:-1])\n",
203
+ "ax_base.set_xticklabels(labels, fontproperties=font18)\n",
204
+ "\n",
205
+ "dataset_map = {\n",
206
+ " 'MMBench_TEST_EN': 'MMBench (Test)', \n",
207
+ " 'MMBench_TEST_CN': 'MMBenchCN (Test)', \n",
208
+ " 'MathVista': 'MathVista (TestMini)', \n",
209
+ " 'MMMU_VAL': 'MMMU (Val)'\n",
210
+ "}\n",
211
+ "for i, label in enumerate(ax_base.get_xticklabels()):\n",
212
+ " x,y = label.get_position()\n",
213
+ " text = label.get_text()\n",
214
+ " text = dataset_map[text] if text in dataset_map else text\n",
215
+ " lab = ax_base.text(x, y, text, transform=label.get_transform(),\n",
216
+ " ha=label.get_ha(), va=label.get_va(), font_properties=font15)\n",
217
+ " lab.set_rotation(360 / num_vars * i + 270)\n",
218
+ " labels.append(lab)\n",
219
+ "ax_base.set_xticklabels([])\n",
220
+ "\n",
221
+ "leg = ax_base.legend(loc='center right', bbox_to_anchor=(1.6, 0.5), prop=font15, ncol=1, frameon=True, labelspacing=1.2)\n",
222
+ "for line in leg.get_lines():\n",
223
+ " line.set_linewidth(2.5)\n",
224
+ "\n",
225
+ "cx, cy, sz = 0.44, 0.435, 0.34\n",
226
+ "axes = [fig.add_axes([cx - sz, cy - sz, cx + sz, cy + sz], projection='polar', label='axes%d' % i) for i in range(num_vars)]\n",
227
+ " \n",
228
+ "for ax, angle, label in zip(axes, angles_deg, labels):\n",
229
+ " ax.patch.set_visible(False)\n",
230
+ " ax.grid(False)\n",
231
+ " ax.xaxis.set_visible(False)\n",
232
+ " cur_range = range_map[label]\n",
233
+ " label_list = [cur_range[0] + (cur_range[1] - cur_range[0]) / 5 * i for i in range(2, 6)]\n",
234
+ " label_list = [f'{x:.1f}' for x in label_list]\n",
235
+ " ax.set_rgrids(range(40, 120, 20), angle=angle, labels=label_list, font_properties=font12)\n",
236
+ " ax.spines['polar'].set_visible(False)\n",
237
+ " ax.set_ylim(0, 100)\n",
238
+ "\n",
239
+ "title_text = f'{len(model2vis)} Representative VLMs on {num_vars} Benchmarks in OpenCompass Multi-Modal Leaderboard.'\n",
240
+ "plt.figtext(.7, .95, title_text, fontproperties=font18, ha='center')\n",
241
+ "plt.show()"
242
+ ]
243
+ }
244
+ ],
245
+ "metadata": {
246
+ "kernelspec": {
247
+ "display_name": "base",
248
+ "language": "python",
249
+ "name": "python3"
250
+ },
251
+ "language_info": {
252
+ "codemirror_mode": {
253
+ "name": "ipython",
254
+ "version": 3
255
+ },
256
+ "file_extension": ".py",
257
+ "mimetype": "text/x-python",
258
+ "name": "python",
259
+ "nbconvert_exporter": "python",
260
+ "pygments_lexer": "ipython3",
261
+ "version": "3.8.5"
262
+ }
263
+ },
264
+ "nbformat": 4,
265
+ "nbformat_minor": 2
266
+ }
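The notebook above calls `pre_normalize` (defined earlier in the notebook and not shown here) before plotting. A plausible minimal sketch of its assumed behavior: linearly rescale each benchmark's raw scores into [0, 100] so all radar axes share one radial scale, and return the per-benchmark (lo, hi) ranges used for the tick labels. Function name, margin, and exact scaling are assumptions.

```python
# Plausible sketch of the `pre_normalize` helper the notebook calls but does not
# show here (assumed behavior): each benchmark's raw scores are linearly
# rescaled into [0, 100] so all axes share one radial scale, and `range_map`
# keeps the per-benchmark (lo, hi) range used for the tick labels.
def pre_normalize(raw_data, labels, margin=0.1):
    range_map = {}
    for lb in labels:
        vals = [item[lb] for item in raw_data]
        lo, hi = min(vals), max(vals)
        pad = (hi - lo) * margin  # widen the range so points stay off the rim
        range_map[lb] = (lo - pad, hi + pad)
    data_list = [
        {lb: (item[lb] - range_map[lb][0])
             / (range_map[lb][1] - range_map[lb][0]) * 100
         for lb in labels}
        for item in raw_data
    ]
    return data_list, range_map

raw = [{'MME': 1500.0}, {'MME': 2000.0}]
data_list, range_map = pre_normalize(raw, ['MME'])
print(round(data_list[1]['MME'], 1))  # prints 91.7
```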
VLMEvalKit-sudoku/vlmeval/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (473 Bytes). View file
 
VLMEvalKit-sudoku/vlmeval/__pycache__/config.cpython-310.pyc ADDED
Binary file (35.8 kB). View file
 
VLMEvalKit-sudoku/vlmeval/__pycache__/inference_mt.cpython-310.pyc ADDED
Binary file (5.76 kB). View file
 
VLMEvalKit-sudoku/vlmeval/__pycache__/inference_video.cpython-310.pyc ADDED
Binary file (7.76 kB). View file
 
VLMEvalKit-sudoku/vlmeval/api/bluelm_api.py ADDED
@@ -0,0 +1,234 @@
1
+ from vlmeval.smp import *
2
+ from vlmeval.api.base import BaseAPI
3
+ from typing import Iterable, List
4
+ import os
5
+ import re
6
+ import json
7
+
8
+
9
+ def split_think(text: str) -> str:
10
+ """
11
+     Extract the content that follows the closing </think> tag.
12
+ """
13
+ if "</think>" in text:
14
+ answer = text.split("</think>")[1]
15
+ else:
16
+ if "<think>" in text:
17
+ return 'Thinking mode too long to extract answer'
18
+ return text
19
+ return answer
20
+
21
+
22
+ def remove_boxed(s:str):
23
+ left = '\\boxed{'
24
+ try:
25
+ assert s[:len(left)] == left
26
+ assert s[-1] == '}'
27
+ return s[len(left):-1]
28
+ except Exception:
29
+ return None
30
+
31
+
32
+ def last_boxed_only_string(string:str):
33
+ idx = string.rfind('\\boxed')
34
+ if idx < 0:
35
+ idx = string.rfind('\\fbox')
36
+ if idx < 0:
37
+ return None
38
+
39
+ i = idx
40
+ right_brace_idx = None
41
+ num_left_braces_open = 0
42
+ while i < len(string):
43
+ if string[i] == '{':
44
+ num_left_braces_open += 1
45
+ if string[i] == '}':
46
+ num_left_braces_open -= 1
47
+ if num_left_braces_open == 0:
48
+ right_brace_idx = i
49
+ break
50
+ i += 1
51
+
52
+ if right_brace_idx is None:
53
+ retval = None
54
+ else:
55
+ retval = string[idx:right_brace_idx + 1]
56
+
57
+ return retval
58
+
59
+
60
+ def extract_boxed(pred_str:str, strip_double_curly_brace=False):
61
+ boxed_str = last_boxed_only_string(pred_str)
62
+ if boxed_str is None:
63
+         return pred_str  # return the original string
64
+ answer = remove_boxed(boxed_str)
65
+ if answer is None:
66
+         return pred_str  # return the original string
67
+ if strip_double_curly_brace:
68
+         match = re.match(r'^\{(.*)\}$', answer)
69
+ if match:
70
+ answer = match.group(1)
71
+ return answer
72
+
73
+
74
+ def extract_boxed_answer(pred_str:str):
75
+ if pred_str.rfind('\\boxed') < 0 and pred_str.rfind('\\fbox') < 0:
76
+ return pred_str
77
+ return extract_boxed(pred_str, strip_double_curly_brace=True)
78
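The brace-counting walk in `last_boxed_only_string` is what keeps nested braces intact during extraction. A self-contained sketch of the same idea (`sample` is a made-up model response; `last_boxed` is a hypothetical compact re-implementation, not the function above):

```python
# Minimal sketch of brace-balanced extraction of the last \boxed{...} span,
# mirroring the last_boxed_only_string + remove_boxed pair above.
def last_boxed(text):
    idx = text.rfind('\\boxed')
    if idx < 0:
        return None
    depth, start = 0, None
    for i in range(idx, len(text)):
        if text[i] == '{':
            depth += 1
            if start is None:
                start = i
        elif text[i] == '}':
            depth -= 1
            if depth == 0:
                return text[start + 1:i]  # content between the outer braces
    return None  # unbalanced braces

sample = 'Reasoning ... so the answer is \\boxed{\\frac{1}{2}}.'
print(last_boxed(sample))  # prints \frac{1}{2}
```

Nested braces such as `\frac{1}{2}` survive because the span only closes when the depth counter returns to zero.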
+
79
+
80
+ def get_streaming_response(response: requests.Response):
81
+ for chunk in response.iter_lines(chunk_size=4096,
82
+ decode_unicode=False):
83
+ if chunk:
84
+ data = json.loads(chunk.decode("utf-8"))
85
+ output = data.get("result")
86
+ yield output
87
+
88
+
89
+ def multimodal(images, text, url, key, temperature=0.6, max_tokens=32768, top_k=20, top_p=0.95, stream=True, history=None, timeout=60):  # noqa: E501
90
+ if images:
91
+ pics = []
92
+ for image in images:
93
+ with open(image, 'rb') as f:
94
+ pic = base64.b64encode(f.read()).decode('utf-8')
95
+ pics.append(pic)
96
+ data = {
97
+ 'images': pics, 'text': text, 'key': key, 'temperature': temperature,
98
+ 'max_tokens': max_tokens, 'top_k': top_k, 'top_p': top_p, 'stream': stream
99
+ }
100
+ else:
101
+ data = {
102
+ 'text': text, 'key': key, 'temperature': temperature,
103
+ 'max_tokens': max_tokens, 'top_k': top_k, 'top_p': top_p, 'stream': stream
104
+ }
105
+ response = requests.post(url, json=data, headers={"Content-Type": "application/json"}, timeout=timeout)
106
+ if stream:
107
+ final_text = ''
108
+ for h in get_streaming_response(response):
109
+ final_text = h
110
+ else:
111
+ response_data = response.json()
112
+ final_text = response_data.get("result", "")
113
+ return final_text
114
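`get_streaming_response` yields the cumulative `result` field of every chunk, so the streaming branch of `multimodal` only needs to keep the last value it sees. An offline sketch of that consumption pattern (the chunk payloads are made up):

```python
import json

# Each streamed line carries the cumulative result so far (assumed payloads).
chunks = [
    b'{"result": "The"}',
    b'{"result": "The answer"}',
    b'{"result": "The answer is 7."}',
]

final_text = ''
for chunk in chunks:
    if chunk:
        data = json.loads(chunk.decode('utf-8'))
        final_text = data.get('result')  # keep only the latest snapshot

print(final_text)  # prints: The answer is 7.
```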
+
115
+
116
+ class BlueLMWrapper(BaseAPI):
117
+ is_api: bool = True
118
+
119
+ def __init__(self,
120
+ model: str = 'BlueLM-2.5-3B',
121
+ retry: int = 5,
122
+ verbose: bool = True,
123
+ temperature: float = 0.6,
124
+ system_prompt: str = None,
125
+ max_tokens: int = 32768,
126
+ top_k: int = 20,
127
+ top_p: float = 0.95,
128
+ timeout: int = 60,
129
+ key: str = None,
130
+ url: str = 'http://api-ai.vivo.com.cn/multimodal',
131
+ **kwargs):
132
+
133
+ self.model = model
134
+         self.fail_msg = 'Failed to obtain answer via BlueLM API. '
135
+ self.max_tokens = max_tokens
136
+ self.temperature = temperature
137
+ self.top_k = top_k
138
+ self.top_p = top_p
139
+ self.url = url
140
+ self.key = key
141
+ self.timeout = timeout
142
+
143
+ if self.key is None:
144
+ self.key = os.environ.get('BLUELM_API_KEY', None)
145
+ assert self.key is not None, (
146
+             'Please set the API Key (to obtain one, '
147
+             'contact shuai.ren@vivo.com by email).'
148
+ )
149
+
150
+ super().__init__(retry=retry, system_prompt=system_prompt, verbose=verbose, **kwargs)
151
+
152
+ def message_to_promptimg(self, message, dataset=None):
153
+
154
+ num_images = len([x for x in message if x['type'] == 'image'])
155
+ if num_images == 0:
156
+ prompt = '\n'.join([x['value'] for x in message if x['type'] == 'text'])
157
+ image = None
158
+ elif num_images == 1:
159
+ prompt = '\n'.join([x['value'] for x in message if x['type'] == 'text'])
160
+ image = [x['value'] for x in message if x['type'] == 'image']
161
+ else:
162
+ prompt = '\n'.join([x['value'] if x['type'] == 'text' else '<im_start><image><im_end>' for x in message])
163
+ if dataset == 'BLINK':
164
+ image = concat_images_vlmeval(
165
+ [x['value'] for x in message if x['type'] == 'image'],
166
+ target_size=512)
167
+ else:
168
+ image = [x['value'] for x in message if x['type'] == 'image']
169
+
170
+ if dataset in ['MMBench_DEV_EN_V11', 'MMBench_DEV_CN_V11', 'MMBench_TEST_EN_V11', 'MMBench_TEST_CN_V11',
171
+ 'AI2D_TEST', 'AI2D_TEST_TO_MASK', 'MMMU_DEV_VAL', 'MMStar']:
172
+ prompt = prompt.replace('Please select the correct answer from the options above.',
173
+ 'Answer with the option’s letter from the given choices directly.')
174
+ prompt = prompt.replace('Question: Hint: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.\n','') # noqa: E501
175
+ elif dataset in ['ChartQA_TEST']:
176
+ prompt = prompt.replace('Answer the question using a single word or phrase.',
177
+ 'Answer the question using a single number or phrase.')
178
+ elif dataset in ['DocVQA_VAL', 'DocVQA_TEST', ]:
179
+ prompt = prompt.replace('Answer the question using a single word or phrase.',
180
+ 'Give the short answer directly.')
181
+ elif dataset in ['TextVQA_VAL']:
182
+ prompt = prompt.replace('Answer the question using a single word or phrase.',
183
+                                      'When the provided information is insufficient, respond with ’Unanswerable’. '
184
+ 'Answer the question using a single word or phrase.')
185
+ elif dataset in ['MTVQA_TEST']:
186
+ prompt = prompt.replace(
187
+ '\nAnswer the question using a word or phrase in the language of the question.', '')
188
+ elif dataset in ['MathVista_MINI']:
189
+ if 'Choices:' in prompt:
190
+ prompt = prompt.replace('Choices:', 'Options:').replace('Hint:', 'Context:')
191
+ for i in range(1, 7): # replace A ~ F
192
+ prompt = prompt.replace(f'({chr(64 + i)})', f'{chr(64 + i)}.')
193
+ prompt += '\nAnswer with the option’s letter from the given choices directly.'
194
+ else:
195
+ prompt += '\nAnswer the question using a single word or phrase.'
196
+ elif dataset in ['HallusionBench']:
197
+ prompt = prompt + " Please answer yes or no."
198
+ return prompt, image
199
+
200
+ def generate_inner(self, inputs, **kwargs) -> str:
201
+
202
+ assert isinstance(inputs, str) or isinstance(inputs, list)
203
+ pure_text = np.all([x['type'] == 'text' for x in inputs])
204
+ assert not pure_text
205
+
206
+ prompt, image_path = self.message_to_promptimg(inputs, kwargs['dataset'])
207
+
208
+ try:
209
+ response = multimodal(
210
+ images=image_path, text=prompt, url=self.url, key=self.key, temperature=self.temperature,
211
+ max_tokens=self.max_tokens, top_k=self.top_k, top_p=self.top_p, timeout=self.timeout)
212
+ if kwargs['dataset'] in [
213
+ 'MMBench_DEV_EN_V11', 'MMBench_DEV_CN_V11', 'MMBench_TEST_EN_V11', 'MMBench_TEST_CN_V11',
214
+ 'AI2D_TEST', 'AI2D_TEST_TO_MASK', 'MMMU_DEV_VAL', 'MMStar',
215
+ 'OCRBench', 'MMVet', 'MathVista_MINI', 'HallusionBench'
216
+ ]:
217
+
218
+                 answer = split_think(response)  # multimodal() returns the full text, not a list
219
+ answer = extract_boxed_answer(answer)
220
+ else:
221
+                 answer = split_think(response)
222
+ self.logger.info(f'answer : {answer}')
223
+ return 0, answer, 'Succeeded! '
224
+ except Exception as err:
225
+ if self.verbose:
226
+ self.logger.error(f'{type(err)}: {err}')
227
+ self.logger.error(f'The input messages are {inputs}.')
228
+ return -1, '', ''
229
+
230
+
231
+ class BlueLM_API(BlueLMWrapper):
232
+
233
+ def generate(self, message, dataset=None):
234
+ return super(BlueLM_API, self).generate(message, dataset=dataset)
VLMEvalKit-sudoku/vlmeval/api/doubao_vl_api.py ADDED
@@ -0,0 +1,210 @@
1
+ from vlmeval.smp import *
2
+ import os
3
+ import sys
4
+ from vlmeval.api.base import BaseAPI
5
+ import math
6
+ from vlmeval.dataset import DATASET_TYPE
7
+ from vlmeval.dataset import img_root_map
8
+ from io import BytesIO
9
+ import pandas as pd
10
+ import requests
11
+ import json
12
+ import base64
13
+ import time
14
+ from openai import OpenAI
15
+
16
+
17
+ class DoubaoVLWrapper(BaseAPI):
18
+
19
+ is_api: bool = True
20
+
21
+ def __init__(self,
22
+ model: str = '',
23
+ retry: int = 5,
24
+ verbose: bool = True,
25
+ system_prompt: str = None,
26
+ temperature: float = 0,
27
+ timeout: int = 60,
28
+ max_tokens: int = 4096,
29
+                  api_base: str = 'https://ark.cn-beijing.volces.com/api/v3',  # use the recommended regional service endpoint
30
+ **kwargs):
31
+
32
+         self.model = model  # used below to derive the endpoint env variable
33
+ self.cur_idx = 0
34
+ self.fail_msg = 'Failed to obtain answer via API. '
35
+ self.temperature = temperature
36
+ self.max_tokens = max_tokens
37
+
38
+ assert 'DOUBAO_VL_KEY' in os.environ, 'You may need to set the env variable DOUBAO_VL_KEY to use DOUBAO_VL.'
39
+
40
+ key = os.environ.get('DOUBAO_VL_KEY', None)
41
+ assert key is not None, 'Please set the environment variable DOUBAO_VL_KEY. '
42
+ self.key = key
43
+
44
+ assert api_base is not None, 'Please set the variable API_BASE. '
45
+ self.api_base = api_base
46
+ self.timeout = timeout
47
+
48
+ super().__init__(retry=retry, system_prompt=system_prompt, verbose=verbose, **kwargs)
49
+
50
+ # Models that require an EP
51
+ # assert self.model in ['Doubao-1.5-vision-pro', 'doubao-1-5-thinking-vision-pro-250428']
52
+ EP_KEY = 'DOUBAO_VL_ENDPOINT' + '_' + self.model.replace('.', '_').replace('-', '_').upper()
53
+ endpoint = os.getenv(EP_KEY, None)
54
+
55
+ if endpoint is not None:
56
+ self.endpoint = endpoint
57
+ else:
58
+ self.logger.warning(
59
+                 f'Endpoint for model {model} is not set (it can be set via the environment variable {EP_KEY}). '
60
+                 f'Falling back to the model name {model} as the endpoint. '
61
+ )
62
+ self.endpoint = model
63
+
64
+ self.client = OpenAI(
65
+ api_key=self.key,
66
+ base_url=self.api_base,
67
+ timeout=self.timeout
68
+ )
69
+
70
+ self.logger.info(f'Using API Base: {self.api_base}; End Point: {self.endpoint}; API Key: {self.key}')
71
+
72
+ def dump_image(self, line, dataset):
73
+ """Dump the image(s) of the input line to the corresponding dataset folder.
74
+
75
+ Args:
76
+ line (line of pd.DataFrame): The raw input line.
77
+ dataset (str): The name of the dataset.
78
+
79
+ Returns:
80
+ str | list[str]: The paths of the dumped images.
81
+ """
82
+ ROOT = LMUDataRoot()
83
+ assert isinstance(dataset, str)
84
+
85
+         img_root = os.path.join(ROOT, 'images', img_root_map(dataset))
86
+ os.makedirs(img_root, exist_ok=True)
87
+ if 'image' in line:
88
+ if isinstance(line['image'], list):
89
+ tgt_path = []
90
+ assert 'image_path' in line
91
+ for img, im_name in zip(line['image'], line['image_path']):
92
+ path = osp.join(img_root, im_name)
93
+ if not read_ok(path):
94
+ decode_base64_to_image_file(img, path)
95
+ tgt_path.append(path)
96
+ else:
97
+ tgt_path = osp.join(img_root, f"{line['index']}.jpg")
98
+ if not read_ok(tgt_path):
99
+ decode_base64_to_image_file(line['image'], tgt_path)
100
+ tgt_path = [tgt_path]
101
+ else:
102
+ assert 'image_path' in line
103
+ tgt_path = toliststr(line['image_path'])
104
+
105
+ return tgt_path
106
+
107
+ def use_custom_prompt(self, dataset_name):
108
+ if dataset_name == 'MathVerse_MINI_Vision_Only':
109
+ return True
110
+ else:
111
+ return False
112
+
113
+ def build_prompt(self, line, dataset: str) -> list[dict[str, str]]:
114
+
115
+ if dataset in {'MathVerse_MINI_Vision_Only'}:
116
+             return self._build_mathVerse_mini_vision_only_prompt(line, dataset)
117
+ raise ValueError(f'Unsupported dataset: {dataset}')
118
+
119
+ def _build_mathVerse_mini_vision_only_prompt(self, line, dataset=None):
120
+ assert self.use_custom_prompt(dataset)
121
+ assert dataset is None or isinstance(dataset, str)
122
+
123
+ tgt_path = self.dump_image(line, dataset)
124
+
125
+ question = line['question']
126
+
127
+ # remove 'directly' from the prompt, so the model will answer the question in Chain-of-Thought (CoT) manner
128
+         prompt = question.replace('directly', '', 1)
129
+
130
+ msgs = []
131
+ if isinstance(tgt_path, list):
132
+ msgs.extend([dict(type='image', value=p) for p in tgt_path])
133
+ else:
134
+ msgs = [dict(type='image', value=tgt_path)]
135
+ msgs.append(dict(type='text', value=prompt))
136
+ return msgs
137
+
138
+ # inputs can be a lvl-2 nested list: [content1, content2, content3, ...]
139
+ # content can be a string or a list of image & text
140
+ def prepare_itlist(self, inputs):
141
+ assert np.all([isinstance(x, dict) for x in inputs])
142
+ has_images = np.sum([x['type'] == 'image' for x in inputs])
143
+ if has_images:
144
+ content_list = []
145
+ for msg in inputs:
146
+ if msg['type'] == 'text':
147
+ content_list.append(dict(type='text', text=msg['value']))
148
+ elif msg['type'] == 'image':
149
+ from PIL import Image
150
+ img = Image.open(msg['value'])
151
+ b64 = encode_image_to_base64(img)
152
+ img_struct = dict(url=f'data:image/jpeg;base64,{b64}')
153
+ content_list.append(dict(type='image_url', image_url=img_struct))
154
+ else:
155
+ assert all([x['type'] == 'text' for x in inputs])
156
+ text = '\n'.join([x['value'] for x in inputs])
157
+ content_list = [dict(type='text', text=text)]
158
+ return content_list
159
+
160
+ def prepare_inputs(self, inputs):
161
+ input_msgs = []
162
+ if self.system_prompt is not None:
163
+ input_msgs.append(dict(role='system', content=self.system_prompt))
164
+ assert isinstance(inputs, list) and isinstance(inputs[0], dict)
165
+ assert np.all(['type' in x for x in inputs]) or np.all(['role' in x for x in inputs]), inputs
166
+ if 'role' in inputs[0]:
167
+ assert inputs[-1]['role'] == 'user', inputs[-1]
168
+ for item in inputs:
169
+ input_msgs.append(dict(role=item['role'], content=self.prepare_itlist(item['content'])))
170
+ else:
171
+ input_msgs.append(dict(role='user', content=self.prepare_itlist(inputs)))
172
+ return input_msgs
173
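`prepare_itlist` and `prepare_inputs` together produce the OpenAI-style interleaved content list that the Ark endpoint consumes. A minimal sketch of the resulting payload shape, using fake bytes in place of a real JPEG on disk:

```python
import base64

# Fake image bytes stand in for a real JPEG file on disk.
fake_jpeg = b'\xff\xd8\xff\xe0 not a real image'
b64 = base64.b64encode(fake_jpeg).decode('ascii')

# Same shape as the messages built by prepare_inputs above.
input_msgs = [{
    'role': 'user',
    'content': [
        {'type': 'image_url', 'image_url': {'url': f'data:image/jpeg;base64,{b64}'}},
        {'type': 'text', 'text': 'Describe the image.'},
    ],
}]
print(input_msgs[0]['content'][0]['image_url']['url'][:23])
```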
+
174
+ def generate_inner(self, inputs, **kwargs) -> str:
175
+
176
+ input_msgs = self.prepare_inputs(inputs)
177
+ temperature = kwargs.pop('temperature', self.temperature)
178
+ max_tokens = kwargs.pop('max_tokens', self.max_tokens)
179
+
180
+ ret_code = -1
181
+ answer = self.fail_msg
182
+ response = None
183
+ payload = dict(model=self.endpoint, messages=input_msgs, max_tokens=max_tokens, temperature=temperature)
184
+ try:
185
+ response = self.client.chat.completions.create(**payload)
186
+ answer = response.choices[0].message.content.strip()
187
+ ret_code = 0
188
+ except Exception as err:
189
+ self.logger.error(f'{type(err)}: {err}')
190
+ self.logger.error(response.text if hasattr(response, 'text') else response)
191
+
192
+ return ret_code, answer, response
193
+
194
+
195
+ class DoubaoVL(DoubaoVLWrapper):
196
+
197
+ def generate(self, message, dataset=None):
198
+ return super(DoubaoVL, self).generate(message)
199
+
200
+
201
+ if __name__ == '__main__':
202
+ # export DOUBAO_VL_KEY=''
203
+ # export DOUBAO_VL_ENDPOINT=''
204
+ model = DoubaoVLWrapper(verbose=True)
205
+ inputs = [
206
+ {'type': 'image', 'value': './assets/apple.jpg'},
207
+ {'type': 'text', 'value': '请详细描述一下这张图片。'},
208
+ ]
209
+ code, answer, resp = model.generate_inner(inputs)
210
+ print(code, answer, resp)
VLMEvalKit-sudoku/vlmeval/api/gemini.py ADDED
@@ -0,0 +1,186 @@
1
+ from vlmeval.smp import *
2
+ from vlmeval.api.base import BaseAPI
3
+
4
+ headers = {'Content-Type': 'application/json'}
5
+
6
+
7
+ class GeminiWrapper(BaseAPI):
8
+
9
+ is_api: bool = True
10
+
11
+ def __init__(self,
12
+ model: str = 'gemini-1.0-pro',
13
+ retry: int = 5,
14
+ key: str = None,
15
+ verbose: bool = True,
16
+ temperature: float = 0.0,
17
+ system_prompt: str = None,
18
+ max_tokens: int = 2048,
19
+ proxy: str = None,
20
+ backend='genai',
21
+ project_id='vlmeval',
22
+ thinking_budget: int = None, # range from 0 to 24576
23
+ # see https://ai.google.dev/gemini-api/docs/thinking
24
+ fps: int = 1,
25
+ media_resolution: str = None,
26
+ **kwargs):
27
+
28
+ self.model = model
29
+ self.fail_msg = 'Failed to obtain answer via API. '
30
+ self.max_tokens = max_tokens
31
+ self.temperature = temperature
32
+ self.thinking_budget = thinking_budget
33
+ self.fps = fps
34
+         # For images, high and medium resolution use 258 tokens per image [default]; low resolution uses 66 tokens per image
35
+         # For videos, high resolution is not supported; medium resolution uses 258 tokens [default] and low resolution 66 tokens per frame  # noqa: E501
36
+ self.media_resolution = media_resolution
37
+ if self.media_resolution:
38
+ assert self.media_resolution in ['low', 'medium', 'high']
39
+ if key is None:
40
+ key = os.environ.get('GOOGLE_API_KEY', None)
41
+ # Try to load backend from environment variable
42
+ be = os.environ.get('GOOGLE_API_BACKEND', None)
43
+ if be is not None and be in ['genai', 'vertex']:
44
+ backend = be
45
+
46
+ assert backend in ['genai', 'vertex']
47
+ if backend == 'genai':
48
+ # We have not evaluated Gemini-1.5 w. GenAI backend
49
+ assert key is not None # Vertex does not require API Key
50
+ try:
51
+ from google import genai
52
+ from google.genai import types
53
+ except ImportError as e:
54
+ raise ImportError(
55
+ "Could not import 'google.genai'. Please install it with:\n"
56
+ " pip install --upgrade google-genai"
57
+ ) from e
58
+ self.media_resolution_dict = {
59
+ 'low': types.MediaResolution.MEDIA_RESOLUTION_LOW,
60
+ 'medium': types.MediaResolution.MEDIA_RESOLUTION_MEDIUM,
61
+ 'high': types.MediaResolution.MEDIA_RESOLUTION_HIGH
62
+ }
63
+ self.genai = genai
64
+ self.client = genai.Client(api_key=key)
65
+
66
+ self.backend = backend
67
+ self.project_id = project_id
68
+ self.api_key = key
69
+
70
+ if proxy is not None:
71
+ proxy_set(proxy)
72
+ super().__init__(retry=retry, system_prompt=system_prompt, verbose=verbose, **kwargs)
73
+
74
+ def upload_video_genai(self, video_path):
75
+ from google import genai
76
+ from google.genai import types
77
+ myfile = self.client.files.upload(file=video_path)
78
+
79
+ video_part = types.Part.from_uri(
80
+ file_uri=myfile.uri,
81
+ mime_type="video/mp4"
82
+ )
83
+
84
+ video_part.video_metadata = types.VideoMetadata(fps=self.fps)
85
+
86
+ while True:
87
+ myfile = self.client.files.get(name=myfile.name)
88
+ if myfile.state == "ACTIVE":
89
+ break
90
+ time.sleep(2)
91
+
92
+ return video_part
93
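`upload_video_genai` uploads the file and then polls until the service marks it ACTIVE, sleeping between attempts. An offline sketch of that poll loop, with `fake_states` standing in for the states the file service would report on successive polls:

```python
# Hypothetical sequence of states returned by successive polls.
fake_states = iter(['PROCESSING', 'PROCESSING', 'ACTIVE'])

polls, state = 0, None
for state in fake_states:
    polls += 1
    if state == 'ACTIVE':
        break
    # the real code sleeps 2 seconds here before polling again

print(polls, state)  # prints: 3 ACTIVE
```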
+
94
+ def build_msgs_genai(self, inputs):
95
+ video_in_msg = False
96
+ video_parts = []
97
+ text_and_images = [] if self.system_prompt is None else [self.system_prompt]
98
+
99
+ for inp in inputs:
100
+ if inp['type'] == 'text':
101
+ text_and_images.append(inp['value'])
102
+ elif inp['type'] == 'image':
103
+ text_and_images.append(Image.open(inp['value']))
104
+ elif inp['type'] == 'video':
105
+ video_file = self.upload_video_genai(inp['value'])
106
+ video_parts.append(video_file)
107
+ video_in_msg = True
108
+
109
+ messages = video_parts + text_and_images
110
+ return messages, video_in_msg
111
+
112
+ def build_msgs_vertex(self, inputs):
113
+ from vertexai.generative_models import Part, Image
114
+ messages = [] if self.system_prompt is None else [self.system_prompt]
115
+ for inp in inputs:
116
+ if inp['type'] == 'text':
117
+ messages.append(inp['value'])
118
+ elif inp['type'] == 'image':
119
+ messages.append(Part.from_image(Image.load_from_file(inp['value'])))
120
+ return messages
121
+
122
+ def generate_inner(self, inputs, **kwargs) -> str:
123
+ if self.backend == 'genai':
124
+ from google.genai import types
125
+ assert isinstance(inputs, list)
126
+ model = self.model
127
+ messages, video_in_msg = self.build_msgs_genai(inputs)
128
+
129
+ # Configure generation parameters
130
+ config_args = {
131
+ "temperature": self.temperature,
132
+ "max_output_tokens": self.max_tokens
133
+ }
134
+ # set resolution for vision input
135
+ if self.media_resolution:
136
+ if video_in_msg:
137
+                     assert self.media_resolution != 'high', 'Only medium and low resolution are supported for video input'
138
+ config_args["media_resolution"] = self.media_resolution_dict[self.media_resolution]
139
+
140
+ # If thinking_budget is specified, add thinking_config
141
+ # By default, Gemini 2.5 Pro will automatically select
142
+ # a thinking budget not exceeding 8192 if not specified.
143
+ if self.thinking_budget is not None:
144
+ config_args["thinking_config"] = types.ThinkingConfig(
145
+ thinking_budget=self.thinking_budget
146
+ )
147
+ config_args.update(kwargs)
148
+
149
+ try:
150
+ resp = self.client.models.generate_content(
151
+ model=model,
152
+ contents=messages,
153
+ config=types.GenerateContentConfig(**config_args)
154
+ )
155
+ answer = resp.text
156
+ return 0, answer, 'Succeeded! '
157
+ except Exception as err:
158
+ if self.verbose:
159
+ self.logger.error(f'{type(err)}: {err}')
160
+ self.logger.error(f'The input messages are {inputs}.')
161
+
162
+ return -1, '', ''
163
+ elif self.backend == 'vertex':
164
+ import vertexai
165
+ from vertexai.generative_models import GenerativeModel
166
+ vertexai.init(project=self.project_id, location='us-central1')
167
+ model_name = 'gemini-1.0-pro-vision' if self.model == 'gemini-1.0-pro' else self.model
168
+ model = GenerativeModel(model_name=model_name)
169
+ messages = self.build_msgs_vertex(inputs)
170
+ try:
171
+ resp = model.generate_content(messages)
172
+ answer = resp.text
173
+ return 0, answer, 'Succeeded! '
174
+ except Exception as err:
175
+ if self.verbose:
176
+ self.logger.error(f'{type(err)}: {err}')
177
+ self.logger.error(f'The input messages are {inputs}.')
178
+
179
+ return -1, '', ''
180
+
181
+
182
+ class Gemini(GeminiWrapper):
183
+ VIDEO_LLM = True
184
+
185
+ def generate(self, message, dataset=None):
186
+ return super(Gemini, self).generate(message)
VLMEvalKit-sudoku/vlmeval/api/taiyi.py ADDED
@@ -0,0 +1,185 @@
1
+ from vlmeval.smp import *
2
+ from vlmeval.api.base import BaseAPI
3
+ from vlmeval.dataset import DATASET_TYPE, img_root_map
4
+
5
+
6
+ class TaiyiWrapper(BaseAPI):
7
+
8
+ is_api: bool = True
9
+
10
+ def __init__(self,
11
+ model: str = 'taiyi',
12
+ retry: int = 5,
13
+ key: str = None,
14
+ verbose: bool = False,
15
+ system_prompt: str = None,
16
+ temperature: float = 0,
17
+ timeout: int = 60,
18
+ url: str = "https://taiyi.megvii.com/v1/chat/completions",
19
+ max_tokens: int = 1024,
20
+ **kwargs):
21
+
22
+ self.model = model
23
+ self.fail_msg = 'Failed to obtain answer via API. '
24
+ self.max_tokens = max_tokens
25
+ self.temperature = temperature
26
+
27
+ if key is None:
28
+ key = os.environ.get('TAIYI_API_KEY', None)
29
+             assert key is not None, 'Please set the API Key (env variable TAIYI_API_KEY). '
30
+ self.key = key
31
+
32
+ self.timeout = timeout
33
+ super().__init__(retry=retry, system_prompt=system_prompt, verbose=verbose, **kwargs)
34
+         assert url is not None, 'Please set the URL. '
35
+ self.url = url
36
+ self.logger.info(f'Using url: {self.url}; API Key: {self.key}')
37
+
38
+ def use_custom_prompt(self, dataset):
39
+         if DATASET_TYPE(dataset) in ('Y/N', 'MCQ', 'VQA'):
40
+ return True
41
+ return False
42
+
43
+ def prepare_inputs(self, inputs):
44
+ input_msgs = []
45
+ if self.system_prompt is not None:
46
+ input_msgs.append(dict(role='system', content=self.system_prompt))
47
+ has_images = np.sum([x['type'] == 'image' for x in inputs])
48
+ if has_images:
49
+ content_list = []
50
+ for msg in inputs:
51
+ if msg['type'] == 'text':
52
+ content_list.append(dict(type='text', text=msg['value']))
53
+ elif msg['type'] == 'image':
54
+                     with open(msg['value'], 'rb') as f:
55
+                         b64 = base64.b64encode(f.read()).decode('ascii')
56
+ img_struct = dict(url=f'data:image/jpeg;base64,{b64}')
57
+ content_list.append(dict(type='image_url', image_url=img_struct))
58
+ input_msgs.append(dict(role='user', content=content_list))
59
+ else:
60
+ assert all([x['type'] == 'text' for x in inputs])
61
+ text = '\n'.join([x['value'] for x in inputs])
62
+ input_msgs.append(dict(role='user', content=text))
63
+ return input_msgs
64
+
65
+ def image_first(self, msgs):
66
+ nr_img = 0
67
+ for s in msgs:
68
+ if s['type'] == 'image':
69
+ nr_img += 1
70
+
71
+ if nr_img == 1:
72
+ new_msgs = []
73
+ img_msg = None
74
+ for s in msgs:
75
+ if s['type'] == 'text':
76
+ new_msgs.append(s)
77
+ else:
78
+ img_msg = s
79
+ new_msgs.insert(0, img_msg)
80
+ else:
81
+ new_msgs = msgs
82
+
83
+ return new_msgs
84
+
85
+ def build_multi_choice_prompt(self, line, dataset=None):
86
+ question = line['question']
87
+ hint = line['hint'] if ('hint' in line and not pd.isna(line['hint'])) else None
88
+ if hint is not None:
89
+ question = hint + '\n' + question
90
+
91
+ options = {
92
+ cand: line[cand]
93
+ for cand in string.ascii_uppercase
94
+ if cand in line and not pd.isna(line[cand])
95
+ }
96
+ for key, item in options.items():
97
+ question += f'\n{key}. {item}'
98
+ prompt = question
99
+
100
+ if len(options):
101
+ prompt += '\n请直接回答选项字母。' if cn_string(
102
+ prompt) else "\nAnswer with the option's letter from the given choices directly."
103
+ else:
104
+ prompt += '\n请直接回答问题。' if cn_string(prompt) else '\nAnswer the question directly.'
105
+
106
+ return prompt
107
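`build_multi_choice_prompt` scans the uppercase columns of the dataset row to append options, then adds a language-matched instruction. A standalone sketch of the same flow with a hypothetical row:

```python
import string

# Hypothetical dataset row: option columns are single uppercase letters.
line = {'question': 'Which animal is a mammal?', 'A': 'Shark', 'B': 'Dolphin', 'C': 'Trout'}

question = line['question']
for cand in string.ascii_uppercase:
    if cand in line:
        question += f'\n{cand}. {line[cand]}'
# English branch of the instruction appended above (cn_string picks the Chinese one).
question += "\nAnswer with the option's letter from the given choices directly."

print(question.splitlines()[2])  # prints: B. Dolphin
```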
+
108
+ def build_yorn_prompt(self, line, dataset=None):
109
+ if listinstr(['HallusionBench'], dataset):
110
+ pre_prompt = 'Read the following question carefully, think and solve it step by step.\n\n'
111
+ else:
112
+ pre_prompt = ''
113
+
114
+ prompt = pre_prompt + line['question'] + ' Please answer yes or no as the final answer.'
115
+
116
+ return prompt
117
+
118
+ def build_vqa_prompt(self, line, dataset=None):
119
+ if listinstr(['OCRBench'], dataset):
120
+ pre_prompt = 'Carefully identify the text in the image and answer the question.\n\n'
121
+ else:
122
+ pre_prompt = ''
123
+
124
+ if listinstr(['MMVet'], dataset):
125
+ post_prompt = '\nAnswer this question in detail.'
126
+ else:
127
+ post_prompt = ''
128
+
129
+ prompt = pre_prompt + line['question'] + post_prompt
130
+
131
+ return prompt
132
+
133
+ def build_prompt(self, line, dataset=None):
134
+ assert self.use_custom_prompt(dataset)
135
+ assert dataset is None or isinstance(dataset, str)
136
+ tgt_path = self.dump_image(line, dataset)
137
+
138
+ if DATASET_TYPE(dataset) == 'MCQ':
139
+ prompt = self.build_multi_choice_prompt(line, dataset)
140
+ elif DATASET_TYPE(dataset) == 'Y/N':
141
+ prompt = self.build_yorn_prompt(line, dataset)
142
+ elif DATASET_TYPE(dataset) == 'VQA':
143
+ prompt = self.build_vqa_prompt(line, dataset)
144
+ else:
145
+ raise RuntimeError(f'Invalid dataset type: {DATASET_TYPE(dataset)}')
146
+ message = []
147
+ message.extend([dict(type='image', value=s) for s in tgt_path])
148
+ message.extend([dict(type='text', value=prompt)])
149
+
150
+ # interleave dataset
151
+ if dataset.startswith('MMMU_'):
152
+ from .. import MMMUDataset
153
+ message = MMMUDataset.split_MMMU(message)
154
+ message = self.image_first(message)
155
+
156
+ return message
157
+
158
+    def generate_inner(self, inputs, **kwargs):
+
+        input_msgs = self.prepare_inputs(inputs)
+        temperature = kwargs.pop('temperature', self.temperature)
+
+        headers = {'Authorization': f'Bearer {self.key}'}
+        payload = dict(
+            model=self.model,
+            messages=input_msgs,
+            n=1,
+            temperature=temperature,
+            **kwargs)
+        response = requests.post(self.url, headers=headers, data=json.dumps(payload), timeout=self.timeout * 1.1)
+        ret_code = response.status_code
+        ret_code = 0 if (200 <= int(ret_code) < 300) else ret_code
+        answer = self.fail_msg
+        try:
+            resp_struct = json.loads(response.text)
+            answer = resp_struct['choices'][0]['message']['content'].strip()
+        except Exception:
+            pass
+        return ret_code, answer, response
+
+
+class TaiyiAPI(TaiyiWrapper):
+
+    def generate(self, message, dataset=None):
+        return super(TaiyiAPI, self).generate(message)
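`generate_inner` above collapses every 2xx HTTP status to a return code of 0 and falls back to `fail_msg` when the response body cannot be parsed. A self-contained sketch of that normalization and parsing logic (no real HTTP call; the JSON shape follows the OpenAI-style `choices` layout the code assumes):

```python
import json

def parse_response(status_code, body_text, fail_msg='Failed to obtain answer'):
    # Collapse all 2xx codes to 0; keep the raw status code otherwise
    ret_code = 0 if 200 <= int(status_code) < 300 else status_code
    answer = fail_msg
    try:
        resp_struct = json.loads(body_text)
        answer = resp_struct['choices'][0]['message']['content'].strip()
    except Exception:
        pass  # malformed body: keep the failure message
    return ret_code, answer

# 2xx with a well-formed body yields (0, stripped answer)
ok = parse_response(200, json.dumps({'choices': [{'message': {'content': ' yes '}}]}))
# Non-2xx or a garbled body keeps the raw code and the failure message
bad = parse_response(500, 'not json')
```

Returning 0 for success lets the retry loop in the surrounding wrapper treat any nonzero code as a failed attempt.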
VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (12.4 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/charxiv.cpython-310.pyc ADDED
Binary file (7.04 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/creation.cpython-310.pyc ADDED
Binary file (25.9 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/emma.cpython-310.pyc ADDED
Binary file (2.12 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/gobench.cpython-310.pyc ADDED
Binary file (6.67 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_caption.cpython-310.pyc ADDED
Binary file (3.02 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_mt.cpython-310.pyc ADDED
Binary file (4.69 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_shortqa.cpython-310.pyc ADDED
Binary file (6.48 kB)
VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_vqa.cpython-310.pyc ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b92f2f6da815318da570882e4402f22e8d79aef18c2f4c8278bc02c14e8af70
+size 107742
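The hunk above is a Git LFS pointer file: the repository tracks only the blob's version line, SHA-256 OID, and byte size, while the actual ~107 kB binary lives in LFS storage. A small sketch parsing that key/value format:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(' ')
        fields[key] = value
    return fields

# The pointer contents from the hunk above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2b92f2f6da815318da570882e4402f22e8d79aef18c2f4c8278bc02c14e8af70
size 107742"""

info = parse_lfs_pointer(pointer)
```

`info['size']` gives the true file size as a string, and `info['oid']` carries the hash algorithm prefix alongside the digest.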
VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/image_yorn.cpython-310.pyc ADDED
Binary file (4.34 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/longvideobench.cpython-310.pyc ADDED
Binary file (10.8 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mlvu.cpython-310.pyc ADDED
Binary file (14.5 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmalignbench.cpython-310.pyc ADDED
Binary file (10.7 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmbench_video.cpython-310.pyc ADDED
Binary file (10.3 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/mmmath.cpython-310.pyc ADDED
Binary file (10.4 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/moviechat1k.cpython-310.pyc ADDED
Binary file (9.27 kB)

VLMEvalKit-sudoku/vlmeval/dataset/__pycache__/slidevqa.cpython-310.pyc ADDED
Binary file (6.75 kB)