Improve dataset card: Add sample usage instructions
#1
by nielsr (HF Staff) · opened

README.md CHANGED

The first hunk reorders the YAML front matter and removes a stray blank line:

```diff
@@ -1,25 +1,24 @@
 ---
-license: cc-by-nc-sa-4.0
-task_categories:
-- image-text-to-text
-- visual-question-answering
 language:
 - zh
 - en
 - es
 - fr
 - ja
+license: cc-by-nc-sa-4.0
+size_categories:
+- 1K<n<10K
+task_categories:
+- image-text-to-text
+- visual-question-answering
 tags:
 - Math
 - Physics
 - Chemistry
 - Biology
 - Multilingual
-size_categories:
-- 1K<n<10K
 ---
 
-
 # MME-SCI: A Comprehensive and Challenging Science Benchmark for Multimodal Large Language Models
 
 MME-SCI is a comprehensive multimodal benchmark designed to evaluate the scientific reasoning capabilities of Multimodal Large Language Models (MLLMs). It addresses key limitations of existing benchmarks by focusing on multilingual adaptability, comprehensive modality coverage, and fine-grained knowledge point annotation.
```
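
The card itself does not show how to load the data; as a point of reference, here is a minimal sketch using the Hugging Face `datasets` library. The repository id `ORG/MME-SCI` is a placeholder for this card's actual id, and the split layout is an assumption:

```python
# Minimal sketch: load the benchmark with the `datasets` library.
# "ORG/MME-SCI" is a placeholder -- substitute this card's actual repo id.
from datasets import load_dataset

ds = load_dataset("ORG/MME-SCI")
print(ds)  # inspect the available splits, features, and row counts
```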

The second hunk (`@@ -65,6 +64,55 @@`) inserts a new "Sample Usage" section into the README, after the final dataset-construction step ("4. **Post-Audit**: Cross-validation by 3 reviewers to ensure quality.") and before the Citation section:

## Sample Usage

Follow these steps to set up your environment and start using the MME-SCI benchmark.

### Environment Setup

First, configure the required dependencies using Conda:

```bash
# Navigate to the project root directory
cd MME_SCI/

# Create a dedicated Conda environment with Python 3.10
conda create -n mmesci python=3.10 -y

# Activate the environment
conda activate mmesci

# Install required packages
pip install -r requirements.txt
```

### Running Evaluation

Once your model is deployed, follow these steps to run the full evaluation pipeline.
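
The card does not include the deployment command itself. Assuming the model is served through vLLM's OpenAI-compatible API, a quick connectivity check could look like the sketch below; the base URL, port, and key are placeholders for your own deployment:

```python
# Sanity-check that the deployed endpoint is reachable before running the
# evaluation; base_url and api_key below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# List the served models; the printed id is the model name the evaluation
# scripts must be configured with.
for model in client.models.list():
    print(model.id)
```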

#### Preparations

Before starting, update the **API configuration** in the following files to match your deployed model:

- `vllm_localapi_eval.py`
- `get_judge_res.py`
- `run_vllm_api_eval_with_metrices.sh`

Adjust parameters such as the model name, port, and API key to ensure connectivity with your vLLM server.
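
The variable names inside those files are not shown on this card, but the values to keep consistent are typically along these lines (all hypothetical placeholders):

```python
# Hypothetical sketch of the settings to align across the three files above;
# the repository's actual variable names may differ.
API_BASE_URL = "http://localhost:8000/v1"  # scheme://host:port of the vLLM server
API_KEY = "EMPTY"                          # vLLM accepts any key unless one was configured
MODEL_NAME = "my-deployed-model"           # must match the model id the server reports
```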

#### One-Stop Evaluation

Run the following script to execute the complete evaluation pipeline, including data processing, model querying, and metric calculation:

```bash
# Navigate to the evaluation directory
cd model_eval/

# Execute the evaluation script
bash run_vllm_api_eval_with_metrices.sh
```

This script will automatically process the data, query the deployed model, and compute evaluation metrics for the MME-SCI benchmark.

## Citation

If you use MME-SCI in your research, please cite: