# Evaluation

SWIFT provides an eval (evaluation) capability that offers standardized evaluation metrics for both raw and trained models.

## Capability Introduction

SWIFT's eval capability is built on the EvalScope evaluation framework from the ModelScope community, with an additional layer of encapsulation to support the evaluation needs of various models.

> Note: EvalScope supports many other advanced capabilities, such as [model performance evaluation](https://evalscope.readthedocs.io/en/latest/user_guides/stress_test/quick_start.html); for those, please use the EvalScope framework directly.

Currently, we support evaluation on **standard evaluation datasets** as well as on **user-defined** evaluation datasets. The **standard evaluation datasets** are provided by three evaluation backends:

Below are the names of the supported datasets. For detailed information on the datasets, please refer to [all supported datasets](https://evalscope.readthedocs.io/en/latest/get_started/supported_dataset.html).

1. Native (default):

    Primarily supports pure text evaluation, and **supports** visualization of evaluation results.
    ```text
    'arc', 'bbh', 'ceval', 'cmmlu', 'competition_math',
    'general_qa', 'gpqa', 'gsm8k', 'hellaswag', 'humaneval',
    'ifeval', 'iquiz', 'mmlu', 'mmlu_pro',
    'race', 'trivia_qa', 'truthful_qa'
    ```

2. OpenCompass:

    Primarily supports pure text evaluation; currently **does not support** visualization of evaluation results.
    ```text
    'obqa', 'cmb', 'AX_b', 'siqa', 'nq', 'mbpp', 'winogrande', 'mmlu', 'BoolQ', 'cluewsc', 'ocnli', 'lambada',
    'CMRC', 'ceval', 'csl', 'cmnli', 'bbh', 'ReCoRD', 'math', 'humaneval', 'eprstmt', 'WSC', 'storycloze',
    'MultiRC', 'RTE', 'chid', 'gsm8k', 'AX_g', 'bustm', 'afqmc', 'piqa', 'lcsts', 'strategyqa', 'Xsum', 'agieval',
    'ocnli_fc', 'C3', 'tnews', 'race', 'triviaqa', 'CB', 'WiC', 'hellaswag', 'summedits', 'GaokaoBench',
    'ARC_e', 'COPA', 'ARC_c', 'DRCD'
    ```

3. VLMEvalKit:

    Primarily supports multimodal evaluation and currently **does not support** visualization of evaluation results.
    ```text
    'COCO_VAL', 'MME', 'HallusionBench', 'POPE', 'MMBench_DEV_EN', 'MMBench_TEST_EN', 'MMBench_DEV_CN', 'MMBench_TEST_CN',
    'MMBench', 'MMBench_CN', 'MMBench_DEV_EN_V11', 'MMBench_TEST_EN_V11', 'MMBench_DEV_CN_V11',
    'MMBench_TEST_CN_V11', 'MMBench_V11', 'MMBench_CN_V11', 'SEEDBench_IMG', 'SEEDBench2',
    'SEEDBench2_Plus', 'ScienceQA_VAL', 'ScienceQA_TEST', 'MMT-Bench_ALL_MI', 'MMT-Bench_ALL',
    'MMT-Bench_VAL_MI', 'MMT-Bench_VAL', 'AesBench_VAL', 'AesBench_TEST', 'CCBench', 'AI2D_TEST', 'MMStar',
    'RealWorldQA', 'MLLMGuard_DS', 'BLINK', 'OCRVQA_TEST', 'OCRVQA_TESTCORE', 'TextVQA_VAL', 'DocVQA_VAL',
    'DocVQA_TEST', 'InfoVQA_VAL', 'InfoVQA_TEST', 'ChartQA_TEST', 'MathVision', 'MathVision_MINI',
    'MMMU_DEV_VAL', 'MMMU_TEST', 'OCRBench', 'MathVista_MINI', 'LLaVABench', 'MMVet', 'MTVQA_TEST',
    'MMLongBench_DOC', 'VCR_EN_EASY_500', 'VCR_EN_EASY_100', 'VCR_EN_EASY_ALL', 'VCR_EN_HARD_500',
    'VCR_EN_HARD_100', 'VCR_EN_HARD_ALL', 'VCR_ZH_EASY_500', 'VCR_ZH_EASY_100', 'VCR_ZH_EASY_ALL',
    'VCR_ZH_HARD_500', 'VCR_ZH_HARD_100', 'VCR_ZH_HARD_ALL', 'MMDU', 'MMBench-Video', 'Video-MME'
    ```

## Environment Preparation

```shell
pip install ms-swift[eval] -U
```

Or install from source:

```shell
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e '.[eval]'
```

## Evaluation

Four evaluation methods are supported: pure text evaluation, multimodal evaluation, URL evaluation (evaluating an already-deployed model endpoint), and custom dataset evaluation. The basic example below covers pure text evaluation; sketches of multimodal and URL evaluation follow the parameter list, and custom datasets are covered in a later section.

**Basic Example**

```shell
CUDA_VISIBLE_DEVICES=0 \
swift eval \
    --model Qwen/Qwen2.5-0.5B-Instruct \
    --eval_backend Native \
    --infer_backend pt \
    --eval_limit 10 \
    --eval_dataset gsm8k
```
Where:
- model: Can specify a local model path or a model ID on ModelScope
- eval_backend: Options are Native, OpenCompass, VLMEvalKit; default is Native
- infer_backend: Options are pt, vllm, lmdeploy; default is pt
- eval_limit: Sample size for each evaluation set; default is None, which means using all data; can be used for quick validation
- eval_dataset: Evaluation dataset(s); multiple datasets can be set, separated by spaces

For a specific list of evaluation parameters, please refer to [here](./Command-line-parameters.md#evaluation-arguments).
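
For multimodal evaluation, the same command structure applies with the VLMEvalKit backend. The sketch below is a minimal example; the model ID `Qwen/Qwen2-VL-2B-Instruct` is only an illustrative assumption and can be replaced with any supported multimodal model.

```shell
# Minimal multimodal sketch (VLMEvalKit backend).
# The model ID below is an assumption; substitute your own multimodal model.
CUDA_VISIBLE_DEVICES=0 \
swift eval \
    --model Qwen/Qwen2-VL-2B-Instruct \
    --eval_backend VLMEvalKit \
    --infer_backend pt \
    --eval_limit 10 \
    --eval_dataset MME
```

For URL evaluation, i.e. evaluating a model that is already deployed behind an OpenAI-compatible endpoint, a sketch might look like the following. The `eval_url` and `api_key` parameter names are assumptions here; please confirm them against the evaluation arguments documentation linked above.

```shell
# URL evaluation sketch: the endpoint address, API key, and served model name are placeholders.
swift eval \
    --model Qwen2.5-0.5B-Instruct \
    --eval_url http://127.0.0.1:8000/v1/chat/completions \
    --api_key EMPTY \
    --eval_backend Native \
    --eval_limit 10 \
    --eval_dataset gsm8k
```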

## Evaluation During Training

SWIFT supports using EvalScope to evaluate the current model during the training process, allowing for timely understanding of the model's training effectiveness.

**Basic Example**

```shell
CUDA_VISIBLE_DEVICES=0 \
swift sft \
  --model "Qwen/Qwen2.5-0.5B-Instruct" \
  --train_type "lora" \
  --dataset "AI-ModelScope/alpaca-gpt4-data-zh#100" \
  --torch_dtype "bfloat16" \
  --num_train_epochs "1" \
  --per_device_train_batch_size "1" \
  --learning_rate "1e-4" \
  --lora_rank "8" \
  --lora_alpha "32" \
  --target_modules "all-linear" \
  --gradient_accumulation_steps "16" \
  --save_steps "50" \
  --save_total_limit "5" \
  --logging_steps "5" \
  --max_length "2048" \
  --eval_strategy "steps" \
  --eval_steps "5" \
  --per_device_eval_batch_size "5" \
  --eval_use_evalscope \
  --eval_datasets "gsm8k" \
  --eval_datasets_args '{"gsm8k": {"few_shot_num": 0}}' \
  --eval_limit "10"
```

Note that the launch command is `sft`, and the evaluation-related parameters include:
- eval_strategy: Evaluation strategy. Defaults to None, following the `save_strategy` policy
- eval_steps: Defaults to None. If an evaluation dataset exists, it follows the `save_steps` policy
- eval_use_evalscope: Whether to use EvalScope for evaluation; this parameter must be set to enable evaluation
- eval_datasets: Evaluation dataset(s); multiple datasets can be set, separated by spaces
- eval_datasets_args: Evaluation dataset parameters in JSON format; parameters for multiple datasets can be set
- eval_limit: Number of samples from the evaluation dataset
- eval_generation_config: Model inference configuration during evaluation, in JSON format, default is `{'max_tokens': 512}`
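
As an illustration of the generation config, the sketch below trims the training command to its essentials and overrides the generation settings used for in-training evaluation. The `temperature` key is an assumption about what the inference configuration accepts (only `max_tokens` appears in the default above); the remaining flags are taken from the example command.

```shell
# Sketch: overriding the generation config used for in-training evaluation.
# `temperature` is an assumed key; `max_tokens` matches the documented default key.
CUDA_VISIBLE_DEVICES=0 \
swift sft \
  --model "Qwen/Qwen2.5-0.5B-Instruct" \
  --train_type "lora" \
  --dataset "AI-ModelScope/alpaca-gpt4-data-zh#100" \
  --num_train_epochs "1" \
  --eval_strategy "steps" \
  --eval_steps "5" \
  --eval_use_evalscope \
  --eval_datasets "gsm8k" \
  --eval_limit "10" \
  --eval_generation_config '{"max_tokens": 1024, "temperature": 0}'
```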

More evaluation examples can be found in [examples](https://github.com/modelscope/ms-swift/tree/main/examples/eval).

## Custom Evaluation Datasets

This framework supports two predefined dataset formats: multiple-choice questions (MCQ) and question-and-answer (QA). The usage process is as follows:

*Note: When using a custom evaluation dataset, the `eval_backend` parameter must be set to `Native`.*

### Multiple-Choice Question Format (MCQ)
This format is suitable for scenarios involving multiple-choice questions, and the evaluation metric is accuracy.

**Data Preparation**

Prepare a CSV file in the multiple-choice question format, structured as follows:

```text
mcq/
β”œβ”€β”€ example_dev.csv  # (Optional) The filename should follow the format `{subset_name}_dev.csv` for few-shot evaluation
└── example_val.csv  # The filename should follow the format `{subset_name}_val.csv` for the actual evaluation data
```

The CSV file should follow this format:

```text
id,question,A,B,C,D,answer
1,Generally speaking, the amino acids that make up animal proteins are____,4 types,22 types,20 types,19 types,C
2,Among the substances present in the blood, which is not a metabolic end product?____,Urea,Uric acid,Pyruvate,Carbon dioxide,C
```

Where:
- `id` is an optional index
- `question` is the question
- `A`, `B`, `C`, `D`, etc. are the options, with a maximum of 10 options
- `answer` is the correct option

**Launching Evaluation**

Run the following command:

```bash
CUDA_VISIBLE_DEVICES=0 \
swift eval \
    --model Qwen/Qwen2.5-0.5B-Instruct \
    --eval_backend Native \
    --infer_backend pt \
    --eval_dataset general_mcq \
    --dataset_args '{"general_mcq": {"local_path": "/path/to/mcq", "subset_list": ["example"]}}'
```

Where:
- `eval_dataset` should be set to `general_mcq`
- `dataset_args` should be set with:
    - `local_path` as the path to the custom dataset folder
    - `subset_list` as the evaluation subset name(s), i.e. the `{subset_name}` prefix of the CSV files above

**Running Results**

```text
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Model               | Dataset     | Metric          | Subset   |   Num |   Score | Cat.0   |
+=====================+=============+=================+==========+=======+=========+=========+
| Qwen2-0.5B-Instruct | general_mcq | AverageAccuracy | example  |    12 |  0.5833 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
```

### Question-and-Answer Format (QA)
This format is suitable for question-and-answer scenarios; the evaluation metrics are `ROUGE` and `BLEU`.

**Data Preparation**

Prepare a JSON Lines (JSONL) file in the question-and-answer format, organized as follows:

```text
qa/
└── example.jsonl
```

The JSON Lines file should follow this format:

```json
{"query": "What is the capital of China?", "response": "The capital of China is Beijing"}
{"query": "What is the highest mountain in the world?", "response": "It is Mount Everest"}
{"query": "Why can't penguins be seen in the Arctic?", "response": "Because most penguins live in Antarctica"}
```

**Launching Evaluation**

Run the following command:

```bash
CUDA_VISIBLE_DEVICES=0 \
swift eval \
    --model Qwen/Qwen2.5-0.5B-Instruct \
    --eval_backend Native \
    --infer_backend pt \
    --eval_dataset general_qa \
    --dataset_args '{"general_qa": {"local_path": "/path/to/qa", "subset_list": ["example"]}}'
```

Where:
- `eval_dataset` should be set to `general_qa`
- `dataset_args` is a JSON string that needs to be set with:
    - `local_path` as the path to the custom dataset folder
    - `subset_list` as the evaluation subset name(s), i.e. the `*` in the `*.jsonl` filename above

**Running Results**

```text
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Model               | Dataset     | Metric          | Subset   |   Num |   Score | Cat.0   |
+=====================+=============+=================+==========+=======+=========+=========+
| Qwen2-0.5B-Instruct | general_qa  | bleu-1          | default  |    12 |  0.2324 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | bleu-2          | default  |    12 |  0.1451 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | bleu-3          | default  |    12 |  0.0625 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | bleu-4          | default  |    12 |  0.0556 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-1-f       | default  |    12 |  0.3441 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-1-p       | default  |    12 |  0.2393 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-1-r       | default  |    12 |  0.8889 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-2-f       | default  |    12 |  0.2062 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-2-p       | default  |    12 |  0.1453 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-2-r       | default  |    12 |  0.6167 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-l-f       | default  |    12 |  0.333  | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-l-p       | default  |    12 |  0.2324 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
| Qwen2-0.5B-Instruct | general_qa  | rouge-l-r       | default  |    12 |  0.8889 | default |
+---------------------+-------------+-----------------+----------+-------+---------+---------+
```