yuccaaa committed on
Commit a58504d · verified · 1 Parent(s): d6c3698

Upload ms-swift/examples/notebook/qwen2_5-vl-grounding/zh.ipynb with huggingface_hub

ms-swift/examples/notebook/qwen2_5-vl-grounding/zh.ipynb ADDED
@@ -0,0 +1,261 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Qwen2.5-VL Grounding Task\n",
+ "\n",
+ "This notebook walks through the full workflow of using qwen2.5-vl for grounding tasks. You can also use other multimodal models such as internvl2.5 or qwen2-vl.\n",
+ "\n",
+ "We use the [AI-ModelScope/coco](https://modelscope.cn/datasets/AI-ModelScope/coco) dataset to demonstrate the whole pipeline.\n",
+ "\n",
+ "If you want to use a custom dataset, it must conform to the following format:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are a helpful assistant.\"}, {\"role\": \"user\", \"content\": \"<image>Describe the image\"}, {\"role\": \"assistant\", \"content\": \"<ref-object><bbox> and <ref-object><bbox> are playing on the beach\"}], \"images\": [\"/xxx/x.jpg\"], \"objects\": {\"ref\": [\"a dog\", \"a woman\"], \"bbox\": [[331.5, 761.4, 853.5, 1594.8], [676.5, 685.8, 1099.5, 1427.4]]}}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are a helpful assistant.\"}, {\"role\": \"user\", \"content\": \"<image>Find the <ref-object> in the image\"}, {\"role\": \"assistant\", \"content\": \"<bbox><bbox>\"}], \"images\": [\"/xxx/x.jpg\"], \"objects\": {\"ref\": [\"sheep\"], \"bbox\": [[90.9, 160.8, 135, 212.8], [360.9, 480.8, 495, 532.8]]}}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are a helpful assistant.\"}, {\"role\": \"user\", \"content\": \"<image>Help me open Google Chrome\"}, {\"role\": \"assistant\", \"content\": \"Action: click(start_box='<bbox>')\"}], \"images\": [\"/xxx/x.jpg\"], \"objects\": {\"ref\": [], \"bbox\": [[615, 226]]}}"
+ ]
+ },
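+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Such JSONL lines can be generated with `json.dumps`. Here is a minimal sketch (the image path is a placeholder, and the output filename `custom_grounding.jsonl` is an assumption):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "# A minimal sketch: write one custom grounding sample in the format above.\n",
+ "# '/xxx/x.jpg' is a placeholder image path; replace it with a real file.\n",
+ "sample = {\n",
+ "    'messages': [\n",
+ "        {'role': 'system', 'content': 'You are a helpful assistant.'},\n",
+ "        {'role': 'user', 'content': '<image>Find the <ref-object> in the image'},\n",
+ "        {'role': 'assistant', 'content': '<bbox>'},\n",
+ "    ],\n",
+ "    'images': ['/xxx/x.jpg'],\n",
+ "    'objects': {'ref': ['sheep'], 'bbox': [[90.9, 160.8, 135.0, 212.8]]},\n",
+ "}\n",
+ "with open('custom_grounding.jsonl', 'w', encoding='utf-8') as f:\n",
+ "    f.write(json.dumps(sample, ensure_ascii=False) + '\\n')"
+ ]
+ },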
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "When preprocessing the dataset, ms-swift applies the model-specific grounding format: entries in `ref` are filled into `<ref-object>`, and `bbox` coordinates are filled into `<bbox>`, normalized to the 0-1000 range or not depending on the model type. For example, qwen2-vl produces `f'<|object_ref_start|>sheep<|object_ref_end|>'` and `f'<|box_start|>(101,201),(150,266)<|box_end|>'` (qwen2.5-vl does not normalize and only casts floats to ints), while internvl2.5 produces `f'<ref>sheep</ref>'` and `f'<box>[[101, 201, 150, 266]]</box>'`, and so on."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before training, you need to install ms-swift from the main branch:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# pip install git+https://github.com/modelscope/ms-swift.git\n",
+ "\n",
+ "git clone https://github.com/modelscope/ms-swift.git\n",
+ "cd ms-swift\n",
+ "pip install -e .\n",
+ "\n",
+ "# If 'transformers>=4.49' has already been released, installing from the main branch is unnecessary\n",
+ "pip install git+https://github.com/huggingface/transformers.git"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Then, run training with the following shell command. The meaning of the MAX_PIXELS argument is documented [here](https://swift.readthedocs.io/en/latest/Instruction/Command-line-parameters.html#specific-model-arguments).\n",
+ "\n",
+ "### Training\n",
+ "\n",
+ "Single-GPU training:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# GPU memory required: 24GiB\n",
+ "CUDA_VISIBLE_DEVICES=0 \\\n",
+ "MAX_PIXELS=1003520 \\\n",
+ "swift sft \\\n",
+ "    --model Qwen/Qwen2.5-VL-7B-Instruct \\\n",
+ "    --dataset 'AI-ModelScope/coco#2000' \\\n",
+ "    --train_type lora \\\n",
+ "    --torch_dtype bfloat16 \\\n",
+ "    --num_train_epochs 1 \\\n",
+ "    --per_device_train_batch_size 1 \\\n",
+ "    --per_device_eval_batch_size 1 \\\n",
+ "    --learning_rate 1e-4 \\\n",
+ "    --lora_rank 8 \\\n",
+ "    --lora_alpha 32 \\\n",
+ "    --target_modules all-linear \\\n",
+ "    --freeze_vit true \\\n",
+ "    --gradient_accumulation_steps 16 \\\n",
+ "    --eval_steps 100 \\\n",
+ "    --save_steps 100 \\\n",
+ "    --save_total_limit 5 \\\n",
+ "    --logging_steps 5 \\\n",
+ "    --max_length 2048 \\\n",
+ "    --output_dir output \\\n",
+ "    --warmup_ratio 0.05 \\\n",
+ "    --dataloader_num_workers 4 \\\n",
+ "    --dataset_num_proc 4"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Then we push the trained model to ModelScope:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "swift export \\\n",
+ "    --adapters output/vx-xxx/checkpoint-xxx \\\n",
+ "    --push_to_hub true \\\n",
+ "    --hub_model_id '<model-id>' \\\n",
+ "    --hub_token '<sdk-token>' \\\n",
+ "    --use_hf false"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We pushed the trained checkpoint to [swift/test_grounding](https://modelscope.cn/models/swift/test_grounding).\n",
+ "\n",
+ "### Inference\n",
+ "\n",
+ "After training, we use the following command to run inference on the validation set used during training. Here, `--adapters` should be replaced with the last checkpoint folder produced by training. Since the adapters folder contains the training arguments file, there is no need to specify `--model` separately.\n",
+ "\n",
+ "If the model outputs absolute coordinates, resize the image in advance rather than relying on `MAX_PIXELS` or `--max_pixels` (a resize sketch follows the inference command below). If it outputs thousandth-scale (0-1000) coordinates, this constraint does not apply.\n",
+ "\n",
+ "Since we have already pushed the trained checkpoint to ModelScope, the following inference script can be run directly:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "shellscript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "CUDA_VISIBLE_DEVICES=0 \\\n",
+ "swift infer \\\n",
+ "    --adapters swift/test_grounding \\\n",
+ "    --stream true \\\n",
+ "    --load_data_args true \\\n",
+ "    --max_new_tokens 512 \\\n",
+ "    --dataset_num_proc 4"
+ ]
+ },
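+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For models trained to output absolute coordinates, the following is a minimal sketch of pre-resizing an image before inference (assuming PIL; the path and the 1024-pixel cap are placeholders):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch: pre-resize the image when the model emits absolute\n",
+ "# coordinates, so predicted boxes match the pixels the model actually sees.\n",
+ "from PIL import Image\n",
+ "\n",
+ "image = Image.open('/xxx/x.jpg')  # placeholder path\n",
+ "max_side = 1024  # assumed size cap\n",
+ "scale = max_side / max(image.size)\n",
+ "if scale < 1:\n",
+ "    image = image.resize((round(image.width * scale), round(image.height * scale)))\n",
+ "image.save('resized.jpg')"
+ ]
+ },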
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can also run inference programmatically:\n",
+ "\n",
+ "An example of single-sample inference can be found [here](https://github.com/modelscope/ms-swift/blob/main/examples/infer/demo_grounding.py)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.environ['CUDA_VISIBLE_DEVICES'] = '0'\n",
+ "\n",
+ "import re\n",
+ "from typing import Literal\n",
+ "from swift.llm import (\n",
+ "    PtEngine, RequestConfig, BaseArguments, InferRequest, safe_snapshot_download, draw_bbox, load_image, load_dataset, InferEngine\n",
+ ")\n",
+ "from IPython.display import display\n",
+ "\n",
+ "def infer_stream(engine: InferEngine, infer_request: InferRequest):\n",
+ "    # Stream the model response token by token, echoing it as it arrives,\n",
+ "    # and return the full response text.\n",
+ "    request_config = RequestConfig(max_tokens=512, temperature=0, stream=True)\n",
+ "    gen_list = engine.infer([infer_request], request_config)\n",
+ "    query = infer_request.messages[0]['content']\n",
+ "    print(f'query: {query}\\nresponse: ', end='')\n",
+ "    response = ''\n",
+ "    for resp in gen_list[0]:\n",
+ "        if resp is None:\n",
+ "            continue\n",
+ "        delta = resp.choices[0].delta.content\n",
+ "        response += delta\n",
+ "        print(delta, end='', flush=True)\n",
+ "    print()\n",
+ "    return response\n",
+ "\n",
+ "def draw_bbox_qwen2_vl(image, response, norm_bbox: Literal['norm1000', 'none']):\n",
+ "    # Parse qwen2-vl style grounding output and draw the parsed boxes in place.\n",
+ "    matches = re.findall(\n",
+ "        r'<\\|object_ref_start\\|>(.*?)<\\|object_ref_end\\|><\\|box_start\\|>\\((\\d+),(\\d+)\\),\\((\\d+),(\\d+)\\)<\\|box_end\\|>',\n",
+ "        response)\n",
+ "    ref = []\n",
+ "    bbox = []\n",
+ "    for match_ in matches:\n",
+ "        ref.append(match_[0])\n",
+ "        bbox.append(list(match_[1:]))\n",
+ "    draw_bbox(image, ref, bbox, norm_bbox=norm_bbox)\n",
+ "\n",
+ "# Download the weights and load the model\n",
+ "output_dir = 'images_bbox'\n",
+ "model_id_or_path = 'swift/test_grounding'\n",
+ "output_dir = os.path.abspath(os.path.expanduser(output_dir))\n",
+ "adapter_path = safe_snapshot_download(model_id_or_path)\n",
+ "args = BaseArguments.from_pretrained(adapter_path)\n",
+ "engine = PtEngine(args.model, adapters=[adapter_path])\n",
+ "\n",
+ "# Load the validation set and run inference on each sample\n",
+ "_, val_dataset = load_dataset(args.dataset, split_dataset_ratio=args.split_dataset_ratio, num_proc=4, seed=args.seed)\n",
+ "print(f'output_dir: {output_dir}')\n",
+ "os.makedirs(output_dir, exist_ok=True)\n",
+ "for i, data in enumerate(val_dataset):\n",
+ "    image = data['images'][0]\n",
+ "    image = load_image(image['bytes'] or image['path'])\n",
+ "    display(image)\n",
+ "    response = infer_stream(engine, InferRequest(**data))\n",
+ "    draw_bbox_qwen2_vl(image, response, norm_bbox=args.norm_bbox)\n",
+ "    print('-' * 50)\n",
+ "    image.save(os.path.join(output_dir, f'{i}.png'))\n",
+ "    display(image)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "test_py310",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }