Deva8 committed · Commit f8804ef · verified · 1 Parent(s): 983217f

Update README.md

Files changed (1):
  1. README.md +24 -68

README.md CHANGED
@@ -48,95 +48,53 @@ The dataset was generated using the following filtering pipeline:
 
 ```
 .
-├── main_metadata.csv        # Metadata (CSV) for loading
-├── gen_vqa_v2-images.zip    # ZIP containing images and additional files
 │   └── gen_vqa_v22/
-│       ├── images/          # 135k+ COCO images (JPG)
-│       ├── metadata.csv     # Original metadata (not used by dataset loader)
-│       └── qa_pairs.json    # Full QA pairs with all annotations
 └── README.md
 ```
 
-## Metadata Fields
-
-- `image_id`: Original COCO Image ID.
-- `question_id`: Original VQA v2 Question ID.
-- `question`: The natural language question.
-- `answer`: The curated ground-truth answer.
-- `file_name`: Path to image relative to extracted zip root.
 
 ## Usage
 
-### Method 1: Load with Manual Extraction (Recommended for Dataset Viewer)
-
-Since the dataset uses a zip file for images, you'll need to manually extract it first:
 
 ```python
 from datasets import load_dataset
-from huggingface_hub import hf_hub_download
-import zipfile
-import os
-
-# Download the zip file
-zip_path = hf_hub_download(
-    repo_id="Deva8/Generative-VQA-V2-Curated",
-    filename="gen_vqa_v2-images.zip",
-    repo_type="dataset"
-)
-
-# Extract it
-extract_dir = "./gen_vqa_data"
-with zipfile.ZipFile(zip_path, 'r') as zip_ref:
-    zip_ref.extractall(extract_dir)
-
-# Now load the dataset
-dataset = load_dataset(
-    "Deva8/Generative-VQA-V2-Curated",
-    data_files="main_metadata.csv"
-)
-
-# The dataset loader will now be able to find the images
 for item in dataset['train']:
     print(f"Q: {item['question']}")
     print(f"A: {item['answer']}")
-    # Note: You'll need to manually load images using the file_name path
 ```
 
-### Method 2: Direct CSV Loading
 
 ```python
 import pandas as pd
-from PIL import Image
-import os
 
-# Load metadata
 df = pd.read_csv("hf://datasets/Deva8/Generative-VQA-V2-Curated/main_metadata.csv")
-
-# After extracting gen_vqa_v2-images.zip to a local directory
-base_path = "./gen_vqa_data"
-
-# Load an example
-row = df.iloc[0]
-img_path = os.path.join(base_path, row['file_name'])
-img = Image.open(img_path)
-
-print(f"Question: {row['question']}")
-print(f"Answer: {row['answer']}")
-img.show()
 ```
 
-### Check Answer Distribution
-
-```python
-import pandas as pd
-
-df = pd.read_csv("hf://datasets/Deva8/Generative-VQA-V2-Curated/main_metadata.csv")
-print(df['answer'].value_counts().head(10))  # Top 10 most common answers
-```
-
-## Known Limitations
 
-**Dataset Viewer**: The HuggingFace dataset viewer may not work automatically because images are stored in a separate zip file. Users should manually extract the zip and load images programmatically as shown above.
 
 ## License & Attribution
 
@@ -147,8 +105,6 @@ This dataset is a derivative work of the VQA v2 Dataset and the COCO Dataset.
 
 ## Citation
 
-If you use this dataset in your research or project, please cite it as follows:
-
 ```bibtex
 @misc{devarajan_genvqa_2026,
     author = {Devarajan},
 
 ```
 .
+├── main_metadata.csv            # PRIMARY DATA FILE - Use this!
+├── gen_vqa_v2-images.zip        # Images (10GB)
 │   └── gen_vqa_v22/
+│       ├── images/              # 135k+ COCO images
+│       ├── metadata.csv         # (IGNORE - old version)
+│       └── qa_pairs.json        # (IGNORE - raw annotations)
+├── Generative-VQA-V2-Curated.py # Custom loading script
 └── README.md
 ```
 
+**Note:** The `metadata.csv` and `qa_pairs.json` files inside the zip are NOT used by the dataset loader. The dataset uses `main_metadata.csv` at the repository root.
 
 ## Usage
 
+### Load with HuggingFace Datasets (Recommended)
 
 ```python
 from datasets import load_dataset
+
+# Load the dataset using the custom loading script
+dataset = load_dataset("Deva8/Generative-VQA-V2-Curated")
+
+# Access examples
 for item in dataset['train']:
     print(f"Q: {item['question']}")
     print(f"A: {item['answer']}")
+    item['image'].show()
 ```
 
+### Load Metadata Only
 
 ```python
 import pandas as pd
 
 df = pd.read_csv("hf://datasets/Deva8/Generative-VQA-V2-Curated/main_metadata.csv")
+print(df.head())
+print(f"\nDataset size: {len(df)} examples")
+print(f"\nTop 10 answers:\n{df['answer'].value_counts().head(10)}")
 ```
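The summary logic in the snippet above can be tried offline on a small synthetic frame that mirrors the columns of `main_metadata.csv` (the rows below are purely illustrative, not real dataset entries):

```python
import pandas as pd

# Illustrative rows only - the real main_metadata.csv has 135k+ entries
# with actual COCO image paths and VQA v2 question/answer text.
df = pd.DataFrame({
    "image_id": [1, 2, 3, 4],
    "question_id": [10, 20, 30, 40],
    "question": ["What color is the cat?", "How many dogs?",
                 "What color is the sky?", "Is it raining?"],
    "answer": ["black", "two", "black", "no"],
    "file_name": ["gen_vqa_v22/images/a.jpg", "gen_vqa_v22/images/b.jpg",
                  "gen_vqa_v22/images/c.jpg", "gen_vqa_v22/images/d.jpg"],
})

print(f"Dataset size: {len(df)} examples")  # → Dataset size: 4 examples
print(df["answer"].value_counts().head(10))
```

The same `value_counts()` call on the real CSV surfaces the most frequent curated answers, which is a quick sanity check for answer-distribution skew.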
 
+## Metadata Fields
 
+- `image_id`: Original COCO Image ID
+- `question_id`: Original VQA v2 Question ID
+- `question`: The natural language question
+- `answer`: The curated ground-truth answer
+- `file_name`: Path to image (relative to extracted zip)
 
 ## License & Attribution
 
 
 ## Citation
 
 ```bibtex
 @misc{devarajan_genvqa_2026,
     author = {Devarajan},