Update README.md

README.md CHANGED

@@ -8,140 +8,71 @@ pretty_name: BenchLMM
size_categories:
- n<1K
---
-# Dataset Card for Dataset Name
-
-<!-- Provide a quick summary of the dataset. -->

## Dataset Details

### Dataset Description

-<!-- Provide a longer summary of what this dataset is. -->
-
-- **Curated by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]

### Dataset Sources [optional]

-<!-- Provide the basic links for the dataset. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

## Uses

-<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
-[More Information Needed]

## Dataset Structure

-[More Information Needed]

## Dataset Creation

### Curation Rationale

-[More Information Needed]

### Source Data

-<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

-[More Information Needed]
-
-#### Who are the source data producers?
-
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
-[More Information Needed]
-
-### Annotations [optional]
-
-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
-#### Annotation process
-
-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-<!-- This section describes the people or systems who created the annotations. -->
-
-[More Information Needed]
-
-#### Personal and Sensitive Information
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-[More Information Needed]

## Bias, Risks, and Limitations

-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

-[More Information Needed]

**APA:**

-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Dataset Card Authors [optional]
-
-[More Information Needed]
-
-## Dataset Card Contact
-
-[More Information Needed]
size_categories:
- n<1K
---
+# Dataset Card for BenchLMM
+
+BenchLMM is a benchmarking dataset focusing on the cross-style visual capability of large multimodal models. It evaluates how these models perform across varied visual contexts.

## Dataset Details

### Dataset Description

+- **Curated by:** Rizhao Cai, Zirui Song, Dayan Guan, Zhenhao Chen, Xing Luo, Chenyu Yi, and Alex Kot.
+- **Funded by [optional]:** Supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.
+- **Shared by [optional]:** AIFEG.
+- **Language(s) (NLP):** English.
+- **License:** Apache-2.0.

### Dataset Sources [optional]

+- **Repository:** [GitHub - AIFEG/BenchLMM](https://github.com/AIFEG/BenchLMM)
+- **Paper [optional]:** Cai, R., Song, Z., Guan, D., et al. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv:2312.02896.

## Uses

### Direct Use

+The dataset can be used to benchmark large multimodal models, with a focus on their ability to interpret and respond to different visual styles.
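As a concrete illustration of this use, the sketch below scores a model's answers against the benchmark's reference answers with naive exact matching. It is a minimal sketch only: `my_model_answer` is a hypothetical stand-in for the model under test, and the JSONL field names (`question`, `image`, `answer`) are assumptions rather than the documented schema; the repository's `evaluate/` scripts implement the project's actual evaluation.

```python
import json

def my_model_answer(question: str, image_path: str) -> str:
    """Hypothetical stand-in for the large multimodal model under test."""
    raise NotImplementedError("Plug in your model here.")

def exact_match_accuracy(jsonl_path: str) -> float:
    """Score model answers against reference answers by exact matching.

    Field names ('question', 'image', 'answer') are assumed for
    illustration; check the files under jsonl/ for the real schema.
    """
    correct, total = 0, 0
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            record = json.loads(line)
            prediction = my_model_answer(record["question"], record["image"])
            # True counts as 1, so this accumulates the number of matches.
            correct += prediction.strip().lower() == record["answer"].strip().lower()
            total += 1
    return correct / max(total, 1)
```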

## Dataset Structure

+- **Directory Structure:**
+  - `baseline/`: Baseline code for LLaVA and InstructBLIP.
+  - `evaluate/`: Python code for model evaluation.
+  - `evaluate_results/`: Evaluation results of the baseline models.
+  - `jsonl/`: JSONL files with questions, image locations, and answers (see the reading sketch below).
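Since each example lives in a JSONL file, individual records can be inspected with a few lines of Python. This is a minimal sketch: the file name and the field names are hypothetical placeholders, not the card's documented schema.

```python
import json

# File name is hypothetical; see the jsonl/ directory for the real files.
with open("jsonl/example_task.jsonl", encoding="utf-8") as f:
    # Each non-empty line is one self-contained JSON record.
    records = [json.loads(line) for line in f if line.strip()]

# Field names below are assumptions for illustration.
first = records[0]
print(first.get("question"), first.get("image"), first.get("answer"))
```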

## Dataset Creation

### Curation Rationale

+BenchLMM was developed to assess large multimodal models' performance in diverse visual contexts, helping to expose their capabilities and limitations.

### Source Data

#### Data Collection and Processing

+The dataset consists of visual questions and corresponding answers, structured to evaluate multimodal model performance.

## Bias, Risks, and Limitations

+Users should consider the specific visual contexts and question types included in the dataset when interpreting model performance.

## Citation [optional]

**BibTeX:**

+@misc{cai2023benchlmm,
+  title={BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models},
+  author={Rizhao Cai and Zirui Song and Dayan Guan and Zhenhao Chen and Xing Luo and Chenyu Yi and Alex Kot},
+  year={2023},
+  eprint={2312.02896},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV}
+}

**APA:**

+Cai, R., Song, Z., Guan, D., Chen, Z., Luo, X., Yi, C., & Kot, A. (2023). BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models. arXiv preprint arXiv:2312.02896.

+## Acknowledgements
+
+This research is supported in part by the Rapid-Rich Object Search (ROSE) Lab of Nanyang Technological University and the NTU-PKU Joint Research Institute.