---
configs:
  - config_name: default
    data_files:
      - split: train
        path: '*/train/*.arrow'
      - split: val
        path: '*/val/*.arrow'
      - split: test
        path: '*/test/*.arrow'
  - config_name: seq_len_1
    data_files:
      - split: train
        path: seq_len_1/train/*.arrow
      - split: val
        path: seq_len_1/val/*.arrow
      - split: test
        path: seq_len_1/test/*.arrow
  - config_name: seq_len_2
    data_files:
      - split: train
        path: seq_len_2/train/*.arrow
      - split: val
        path: seq_len_2/val/*.arrow
      - split: test
        path: seq_len_2/test/*.arrow
  - config_name: seq_len_4
    data_files:
      - split: train
        path: seq_len_4/train/*.arrow
      - split: val
        path: seq_len_4/val/*.arrow
      - split: test
        path: seq_len_4/test/*.arrow
  - config_name: seq_len_8
    data_files:
      - split: train
        path: seq_len_8/train/*.arrow
      - split: val
        path: seq_len_8/val/*.arrow
      - split: test
        path: seq_len_8/test/*.arrow
  - config_name: seq_len_16
    data_files:
      - split: train
        path: seq_len_16/train/*.arrow
      - split: val
        path: seq_len_16/val/*.arrow
      - split: test
        path: seq_len_16/test/*.arrow
  - config_name: seq_len_32
    data_files:
      - split: test
        path: seq_len_32/test/*.arrow
  - config_name: seq_len_64
    data_files:
      - split: test
        path: seq_len_64/test/*.arrow
  - config_name: seq_len_128
    data_files:
      - split: test
        path: seq_len_128/test/*.arrow
---

# MMReD: A Cross-Modal Benchmark for Dense Reasoning

*Overview of the MMReD benchmark (figure)*

This is the dataset and benchmark accompanying the MMReD paper. It was obtained by running the generation script in the MMReD repository:

```shell
python scripts/generate_dataset.py
```

To save space, it contains only the textual split of the dataset, as the images can be generated deterministically from the JSON files.

To run the full evaluation, train models, or generate images from the JSON files for LVLMs, please refer to the repository.

## Multi-Modal Controllable Environment = Dense Factual Haystack

MMReD introduces the concept of a dense context, enabling controllable generative evaluation of reasoning over arbitrarily long, factually dense scenarios.

It contains 8 splits corresponding to different sequence lengths of environment transitions: [1, 2, 4, ..., 128]. Each split covers 24 evaluation question types, with 200/50/50 train/val/test samples per question type.

Splits grouped by sequence length are available both for evaluation and training, with training samples generated up to a sequence length of 16; longer sequence lengths provide a test split only.

```python
from datasets import load_dataset

train_16 = load_dataset("dondo-sss/mmred", "seq_len_16")["train"]
test_128 = load_dataset("dondo-sss/mmred", "seq_len_128")["test"]
```
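As a quick reference, the split availability implied by the config section above (train/val/test up to a sequence length of 16, test-only beyond that) can be restated as a small mapping; this is a convenience sketch, not part of the dataset's API:

```python
# Split availability per config, restated from the dataset's YAML config metadata:
# seq_len_1 .. seq_len_16 ship train/val/test; seq_len_32 .. seq_len_128 are test-only.
TRAIN_LENS = [1, 2, 4, 8, 16]
TEST_ONLY_LENS = [32, 64, 128]

SPLITS = {
    f"seq_len_{n}": (["train", "val", "test"] if n in TRAIN_LENS else ["test"])
    for n in TRAIN_LENS + TEST_ONLY_LENS
}
```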

Questions are divided into two groups: those resembling standard NIAH (needle-in-a-haystack) evaluation, and our introduced Dense Long Context (LC) evaluation:

| ID | Question template | Dataset name |
|---|---|---|
| **NIAH** | | |
| FA-FA-R | In which room did [C] first appear? | `room_on_char_first_app` |
| FA-CCFA-R | In which room was [C1] when [C2] first appeared in the [R]? | `char_on_char_first_app` |
| FA-FR-C | Who was the first to appear in the [R]? | `first_at_room` |
| FA-RCFA-C | Who was in the [R1] when [C] first appeared in the [R2]? | `char_at_frame` |
| FA-NRFA-I | How many characters were in the [R1] when [C] first appeared in the [R2]? | `n_room_on_char_first_app` |
| FI-FA-R | In which room was [C] at the final step? | `final_app` |
| FI-CCFA-R | In which room was [C1] when [C2] made their final appearance in the [R]? | `char_on_char_final_app` |
| FI-LR-C | Who was the last to appear in the [R]? | `last_at_room` |
| FI-RCFA-C | Who was in the [R1] when [C] made their final appearance in the [R2]? | `char_at_frame` |
| FI-NRFA-I | How many characters were in the [R1] when [C] made their final appearance in the [R2]? | `n_room_on_char_final_app` |
| FX-CF-R | In which room was [C] at step [X]? | `room_at_frame` |
| FX-RF-C | Who was in the [R] at step [X]? | `char_at_frame` |
| FX-CCF-C | Who was in the same room as [C] at step [X]? | `char_on_char_at_frame` |
| FX-NCF-I | How many other characters were in the same room as [C] at step [X]? | `n_char_at_frame` |
| FX-NE-I | How many rooms were empty at step [X]? | `n_empty` |
| **LC** | | |
| LC-RE-R | Which room was empty for the [comp] steps? | `room_empty` |
| LC-WS-R | In which room did [C] spend the [comp] time? | `where_spend` |
| LC-CR-R | Which room was crowded (three or more people) for the most steps? | `crowded_room` |
| LC-WHS-C | Who spent the [comp] time in the [R]? | `who_spend` |
| LC-SA-C | Who spent the [comp] time alone in the rooms? | `spend_alone` |
| LC-ST-C | With whom did [C] spend the [comp] time together in the same room? | `spend_together` |
| LC-SR-I | How many steps did [C] spend in the [R]? | `steps_in_room` |
| LC-RV-I | How many different rooms did [C] visit? | `rooms_visited` |
| LC-CC-I | How many times did a crowd (three or more people in one room) appear? | `crowd_count` |
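To illustrate how the templates above turn into concrete questions, here is a minimal sketch of slot substitution. The concrete values (`kitchen`, `7`) are made up for illustration and are not drawn from the dataset; the actual generation logic lives in the MMReD repository:

```python
# Minimal sketch: instantiate a question template by substituting its slots.
# Slot values here are illustrative, not taken from the dataset.
def fill(template: str, **slots: str) -> str:
    for name, value in slots.items():
        template = template.replace(f"[{name}]", value)
    return template

q = fill("Who was in the [R] at step [X]?", R="kitchen", X="7")
```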