# MME-CC

MME-CC data directory for the paper **MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity**.

- Paper: https://arxiv.org/abs/2511.03146
- Total samples: `1173`

## Directory Structure

- `MME_CC.json`: merged annotations for all tasks (recommended entry point)
- `*.json`: annotations for each subtask
- Subdirectories (e.g., `Maze10/`, `Jigsaw_Puzzle/`): corresponding image files

The paper organizes tasks into three cognitive categories (Spatial, Geometric, and Visual Knowledge Reasoning). This directory currently contains 12 subtask annotation files (including two maze variants, `Maze06` and `Maze10`).

## Minimal Usage Example

```python
import json

with open("MME_CC/MME_CC.json", "r", encoding="utf-8") as f:
    data = json.load(f)

item = data[0]
print(item["id"])
print(item["prompt"])
print(item["image_list"])  # image paths relative to MME_CC/
```

## Citation

```bibtex
@misc{zhang2025mmeccchallengingmultimodalevaluation,
  title={MME-CC: A Challenging Multi-Modal Evaluation Benchmark of Cognitive Capacity},
  author={Kaiyuan Zhang and Chenghao Yang and Zhoufutu Wen and Sihang Yuan and Qiuyue Wang and Chaoyi Huang and Guosheng Zhu and He Wang and Huawenyu Lu and Jianing Wen and Jianpeng Jiao and Lishu Luo and Longxiang Liu and Sijin Wu and Xiaolei Zhu and Xuanliang Zhang and Yu Liu and Ge Zhang and Yi Lin and Guang Shi and Chaoyou Fu and Wenhao Huang},
  year={2025},
  eprint={2511.03146},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2511.03146},
}
```
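
## Resolving Image Paths

Since `image_list` stores paths relative to `MME_CC/`, a small helper can turn them into paths usable from the repository root before loading the images. This is a minimal sketch, not part of the release: the helper name and the synthetic item below are illustrative, and only the fields shown above (`id`, `prompt`, `image_list`) are assumed.

```python
import os

def resolve_image_paths(item, root="MME_CC"):
    """Join each relative path in the item's image_list with the dataset root."""
    return [os.path.join(root, p) for p in item.get("image_list", [])]

# Synthetic item mimicking the annotation schema (not real benchmark data).
item = {
    "id": "Maze10_0001",  # hypothetical id for illustration
    "prompt": "Find a path from the entrance to the exit.",
    "image_list": ["Maze10/0001.png"],
}

for path in resolve_image_paths(item):
    print(path)  # e.g. MME_CC/Maze10/0001.png on POSIX systems
```

The resolved paths can then be passed directly to whatever image loader your evaluation pipeline uses.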