waltsun committed · Commit e1053e4 · verified · 1 parent: c716d95

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -32,11 +32,11 @@ pretty_name: MOAT
 
 **MOAT** (**M**ultimodal model **O**f **A**ll **T**rades) is a challenging benchmark for large multimodal models (LMMs). It consists of vision language (VL) tasks that require the LMM to integrate several VL capabilities and engage in human-like generalist visual problem solving. Moreover, many tasks in **MOAT** focus on LMMs' capability to ground complex text and visual instructions, which is crucial for the application of LMMs in-the-wild. Developing on the VL capability taxonomies proposed in previous benchmark papers, we define 10 fundamental VL capabilities in **MOAT**.
 
-Please check out our [GitHub repo](https://cambrian-yzt.github.io/MOAT/) for further information.
+Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for further information.
 
 ## Usage
 
-Please check out our [GitHub repo](https://cambrian-yzt.github.io/MOAT/) for detail usage.
+Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for detail usage.
 
 **Run Your Own Evaluation**
 
@@ -47,7 +47,7 @@ from datasets import load_dataset
 dataset = load_dataset("waltsun/MOAT", split='test')
 ```
 
-As some questions are formatted as interleaved text and image(s), we recommend referring to the `./inference/eval_API.py` file in our [GitHub repo](https://cambrian-yzt.github.io/MOAT/) for the correct way to query the LMM.
+As some questions are formatted as interleaved text and image(s), we recommend referring to the `./inference/eval_API.py` file in our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for the correct way to query the LMM.
 
 **Column Description**
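The README lines above note that some questions interleave text and image(s) and defer to `./inference/eval_API.py` for the correct query format. As a rough illustration only (the `<image>` placeholder token and the OpenAI-style content-part schema are assumptions for this sketch, not MOAT's documented format), interleaving might look like:

```python
# Sketch: split a question containing image placeholders into interleaved
# text/image content parts. The "<image>" token and the part schema are
# illustrative assumptions, not the actual convention used by eval_API.py.

def build_interleaved_content(question, image_urls, token="<image>"):
    """Interleave text segments with image parts, preserving original order."""
    segments = question.split(token)
    if len(segments) - 1 != len(image_urls):
        raise ValueError("number of placeholders must match number of images")
    parts = []
    for i, seg in enumerate(segments):
        if seg:  # skip empty text segments (e.g. question starts with an image)
            parts.append({"type": "text", "text": seg})
        if i < len(image_urls):
            parts.append({"type": "image_url", "image_url": {"url": image_urls[i]}})
    return parts
```

For the authoritative message format, refer to `./inference/eval_API.py` in the repo itself.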