**MOAT** (**M**ultimodal model **O**f **A**ll **T**rades) is a challenging benchmark for large multimodal models (LMMs). It consists of vision language (VL) tasks that require the LMM to integrate several VL capabilities and engage in human-like generalist visual problem solving. Moreover, many tasks in **MOAT** focus on LMMs' capability to ground complex text and visual instructions, which is crucial for the application of LMMs in-the-wild. Developing on the VL capability taxonomies proposed in previous benchmark papers, we define 10 fundamental VL capabilities in **MOAT**.
Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for further information.
## Usage
Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for detailed usage.
**Run Your Own Evaluation**

```python
from datasets import load_dataset

dataset = load_dataset("waltsun/MOAT", split='test')
```
As some questions are formatted as interleaved text and image(s), we recommend referring to the `./inference/eval_API.py` file in our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for the correct way to query the LMM.
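
As a rough sketch of what handling interleaved questions can involve (the `<image N>` placeholder convention and the OpenAI-style message parts below are illustrative assumptions, not necessarily the actual MOAT format — `eval_API.py` in the repo has the real logic):

```python
import re

def build_interleaved_content(question, image_urls):
    """Split a question containing 1-indexed <image N> placeholders into
    an interleaved list of text and image message parts (OpenAI style)."""
    parts, last = [], 0
    for m in re.finditer(r"<image (\d+)>", question):
        if m.start() > last:  # text preceding this placeholder
            parts.append({"type": "text", "text": question[last:m.start()]})
        url = image_urls[int(m.group(1)) - 1]  # placeholders are 1-indexed
        parts.append({"type": "image_url", "image_url": {"url": url}})
        last = m.end()
    if last < len(question):  # trailing text after the last placeholder
        parts.append({"type": "text", "text": question[last:]})
    return parts

content = build_interleaved_content(
    "Compare <image 1> with <image 2>. Which region is larger?",
    ["https://example.com/a.png", "https://example.com/b.png"],
)
# content alternates text and image parts in question order
```

The resulting list can be passed as the `content` of a user message when querying an API-based LMM.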
**Column Description**