Scholarus committed
Commit 7a429db · 1 Parent(s): 405ef7c

Convert to Parquet format (remove loading script)

Files changed (2):
  1. README.md +14 -10
  2. data/train.parquet +3 -0
README.md CHANGED
@@ -37,20 +37,21 @@ The dataset currently includes:
 - **3 questions per scene**: one for each image
 - **Answer matrix**: the answer of every image to every question
 
-Data is stored in `dataset_1k_720p_2/image_mapping_with_questions.csv` with images in `dataset_1k_720p_2/images/`.
+Data is stored in **Parquet format** (`data/train.parquet`) with images in `dataset_1k_720p_2/images/`.
 
 ## Hugging Face Dataset Viewer
 
-This repository is configured as a Hugging Face dataset. The **Preview** tab will automatically:
+This repository uses the standard **Parquet format** (no custom loading script required). The **Preview** tab will automatically:
 
-1. Use `MMB_Dataset.py` to load the CSV from `dataset_1k_720p_2/`
-2. Resolve image paths to display thumbnails
-3. Show all fields including:
+1. Load the Parquet file from `data/train.parquet`
+2. Display all fields including:
    - Scene IDs
-   - Three images per example (original + 2 counterfactuals)
+   - Image filenames (original + 2 counterfactuals)
    - Questions and difficulties
    - Complete answer matrix (9 fields)
 
+**Note:** Image columns contain filenames (e.g., `scene_0000_original.png`). The actual images are stored in `dataset_1k_720p_2/images/`.
+
 ### Loading from Python
 
 After pushing this repository to the Hub, load it with:
@@ -58,21 +59,24 @@ After pushing this repository to the Hub, load it with:
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("scholo/MMB_dataset", split="train", trust_remote_code=True)
+ds = load_dataset("scholo/MMB_dataset", split="train")
 print(ds[0])
 ```
 
+No `trust_remote_code=True` needed since we use standard Parquet format!
+
 ## Directory Structure
 
 ```
 MMB-Dataset/
-├── MMB_dataset.py                        # Hugging Face dataset loading script
 ├── README.md                             # This file
 ├── .gitattributes                        # Git LFS configuration for images
+├── data/                                 # Dataset files (Parquet format)
+│   └── train.parquet                     # Main dataset file
 ├── dataset_1k_720p_2/                    # Current dataset run
-│   ├── images/                           # All PNG images
+│   ├── images/                           # All PNG images (referenced by Parquet)
 │   ├── scenes/                           # JSON scene descriptions (reference)
-│   ├── image_mapping_with_questions.csv  # Main data file
+│   ├── image_mapping_with_questions.csv  # Original CSV (source)
 │   ├── checkpoint.json                   # Run metadata
 │   └── run_metadata.json                 # Run metadata
 ```
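Since the image columns in the new Parquet layout hold filenames rather than embedded images, a consumer has to join them back to the files under `dataset_1k_720p_2/images/`. A minimal sketch of that join (the helper name and the `scene_0000_original.png` example follow the README's note; no such helper ships with the repository):

```python
from pathlib import Path

# Where the README says the PNG files live.
IMAGE_DIR = Path("dataset_1k_720p_2/images")

def resolve_image_path(filename: str) -> Path:
    """Map a filename stored in a Parquet image column to its on-disk path."""
    return IMAGE_DIR / filename

print(resolve_image_path("scene_0000_original.png").as_posix())
# dataset_1k_720p_2/images/scene_0000_original.png
```

The same mapping could be applied over a loaded split with `ds.map(...)` if full paths (or decoded images) are needed per example.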
data/train.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a39ca89d19e06768c08b1b217dd5814336a570c76898aef79bca4c44a72213b
3
+ size 49238