WayBob committed on
Commit 823cc2a · verified · 1 parent: 2a61dce

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ xview2_train_tier3_sharegpt.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,317 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - visual-question-answering
+ - image-classification
+ - image-segmentation
+ language:
+ - en
+ - zh
+ - ja
+ tags:
+ - disaster-recognition
+ - satellite-imagery
+ - remote-sensing
+ - vision-language
+ - multi-modal
+ - xview2
+ size_categories:
+ - 10K<n<100K
+ pretty_name: xView2 Multi-Language Disaster Recognition Dataset
+ ---
+
+ # xView2 Multi-Language Disaster Recognition Dataset
+
+ This dataset is derived from the xBD (xView2) Building Damage Assessment Dataset and has been reformatted for Vision-Language Model (VLM) training with multi-language support.
+
+ ## 📊 Dataset Overview
+
+ This dataset contains satellite imagery paired with multi-language conversational annotations for disaster recognition tasks. It supports three languages: **English**, **Chinese (中文)**, and **Japanese (日本語)**.
+
+ ### Dataset Splits
+
+ - **Training Set (tier3)**: 9,168 image pairs → 55,008 conversations
+ - **Test Set**: 933 image pairs → 5,598 conversations
+ - **Total**: 10,101 image pairs → 60,606 multi-language conversations
+
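+ Each pair appears as a pre-disaster and a post-disaster conversation in each of the three languages, so every pair contributes 2 × 3 = 6 conversations. A quick sanity check of the totals above:
+
+ ```python
+ # 2 image states (pre/post) x 3 languages (en/zh/ja) = 6 conversations per pair
+ pairs = {"train (tier3)": 9168, "test": 933}
+ for split, n in pairs.items():
+     print(f"{split}: {n} pairs -> {n * 2 * 3} conversations")
+ # train (tier3): 9168 pairs -> 55008 conversations
+ # test: 933 pairs -> 5598 conversations
+ ```
+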
+ Each image pair consists of:
+ - Pre-disaster satellite image
+ - Post-disaster satellite image
+ - Corresponding segmentation masks
+ - Building damage labels
+ - Metadata (capture date, sun position, sensor info)
+
+ ## 🗂️ Dataset Structure
+
+ ### Downloadable Files (Available on HuggingFace)
+
+ The dataset is provided as compressed archives to facilitate downloading:
+
+ ```
+ xview2/
+ ├── xview2_train.tar.gz              # Training split (8.04 GB compressed)
+ ├── xview2_tier3.tar.gz              # Additional training data (17.79 GB compressed)
+ ├── xview2_test.tar.gz               # Test split (2.67 GB compressed)
+ ├── xview2_train_tier3.json          # Training metadata
+ ├── xview2_test.json                 # Test metadata
+ ├── xview2_train_tier3_sharegpt.json # Training conversations (ShareGPT format)
+ ├── xview2_test_sharegpt.json        # Test conversations (ShareGPT format)
+ ├── verify_dataset.py                # Dataset integrity verification script
+ ├── README.md                        # This file
+ └── samples/images/                  # Sample images for preview
+     ├── guatemala-volcano_00000000_pre_disaster.png
+     ├── guatemala-volcano_00000000_post_disaster.png
+     ├── hurricane-florence_00000004_post_disaster.png
+     └── santa-rosa-wildfire_00000000_post_disaster.png
+ ```
+
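+ Each archive's file page on HuggingFace shows the SHA256 recorded in its Git LFS pointer, so a download can be verified before extraction. A minimal sketch, using the hash recorded for `xview2_test.tar.gz`:
+
+ ```python
+ import hashlib
+
+ # SHA256 from the Git LFS pointer of xview2_test.tar.gz
+ EXPECTED = "08f0665055c516fd93459096c633e404a18ab370ee97407ed550be60b4383e52"
+
+ h = hashlib.sha256()
+ with open("xview2_test.tar.gz", "rb") as f:
+     for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
+         h.update(chunk)
+
+ assert h.hexdigest() == EXPECTED, "checksum mismatch"
+ print("xview2_test.tar.gz OK")
+ ```
+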
+ ### After Extraction
+
+ Once you extract the compressed archives, the structure will be:
+
+ ```
+ xview2/
+ ├── train/             # Training split (extracted from xview2_train.tar.gz)
+ │   ├── images/        # Satellite images (pre/post disaster)
+ │   ├── masks/         # Segmentation masks
+ │   ├── color_masks/   # Visualization masks
+ │   └── labels/        # Building annotations (JSON)
+ ├── tier3/             # Additional training data (extracted from xview2_tier3.tar.gz)
+ │   ├── images/
+ │   ├── masks/
+ │   ├── color_masks/
+ │   └── labels/
+ ├── test/              # Test split (extracted from xview2_test.tar.gz)
+ │   ├── images/
+ │   ├── masks/
+ │   ├── color_masks/
+ │   └── labels/
+ └── ... (metadata and conversation files)
+ ```
+
+ ## 🌍 Disaster Types
+
+ The dataset covers 6 types of natural disasters:
+
+ | Type | English | 中文 | 日本語 | Examples |
+ |------|---------|------|--------|----------|
+ | volcano | Volcano | 火山 | 火山 | Guatemala volcano |
+ | flooding | Flooding | 洪水 | 洪水 | Hurricane Florence, Hurricane Harvey |
+ | wind | Wind damage | 风灾 | 風災 | Hurricane Matthew, Hurricane Michael |
+ | earthquake | Earthquake | 地震 | 地震 | Mexico earthquake |
+ | tsunami | Tsunami | 海啸 | 津波 | Palu tsunami |
+ | fire | Fire | 火灾 | 火災 | Santa Rosa wildfire, SoCal fire |
+
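+ The per-type breakdown is not listed here, but it can be tallied from the metadata files, whose entries carry a `disaster_type` field (the same field `verify_dataset.py` validates). A minimal sketch:
+
+ ```python
+ import json
+ from collections import Counter
+
+ # Count training image pairs per disaster type
+ with open("xview2_train_tier3.json", encoding="utf-8") as f:
+     entries = json.load(f)
+
+ print(Counter(e["disaster_type"] for e in entries))
+ ```
+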
+ ## 🖼️ Sample Images
+
+ The `samples/images/` directory contains example images for preview:
+
+ - **Guatemala Volcano (Pre-disaster)**: `guatemala-volcano_00000000_pre_disaster.png`
+ - **Guatemala Volcano (Post-disaster)**: `guatemala-volcano_00000000_post_disaster.png`
+ - **Hurricane Florence (Post-disaster)**: `hurricane-florence_00000004_post_disaster.png`
+ - **Santa Rosa Wildfire (Post-disaster)**: `santa-rosa-wildfire_00000000_post_disaster.png`
+
+ ## 💬 Conversation Format
+
+ The dataset uses the ShareGPT format with two-turn conversations. Each conversation asks the same two questions in its language: whether a disaster occurred, and what type of disaster it was.
+
+ ### Real Example: Guatemala Volcano (Post-disaster - Chinese)
+
+ **Image**: `train/images/guatemala-volcano_00000000_post_disaster.png` (see sample above)
+
+ ```json
+ {
+   "id": "xview2_train_000000_post_zh",
+   "images": ["data/xview2/train/images/guatemala-volcano_00000000_post_disaster.png"],
+   "messages": [
+     {
+       "from": "human",
+       "value": "<image>\n这张图片中发生了灾害吗?"
+     },
+     {
+       "from": "gpt",
+       "value": "是的,这张图片中发生了灾害。这是灾后图像。"
+     },
+     {
+       "from": "human",
+       "value": "这张图片中发生了什么类型的灾害?"
+     },
+     {
+       "from": "gpt",
+       "value": "这张图片中发生了火山灾害。"
+     }
+   ]
+ }
+ ```
+
+ ### Real Example: Guatemala Volcano (Pre-disaster - English)
+
+ **Image**: `train/images/guatemala-volcano_00000000_pre_disaster.png` (see sample above)
+
+ ```json
+ {
+   "id": "xview2_train_000000_pre_en",
+   "images": ["data/xview2/train/images/guatemala-volcano_00000000_pre_disaster.png"],
+   "messages": [
+     {
+       "from": "human",
+       "value": "<image>\nHas a disaster occurred in this image?"
+     },
+     {
+       "from": "gpt",
+       "value": "No, no disaster has occurred in this image. This is a pre-disaster image."
+     },
+     {
+       "from": "human",
+       "value": "What type of disaster occurred in this image?"
+     },
+     {
+       "from": "gpt",
+       "value": "No disaster has occurred. This image was taken before the disaster."
+     }
+   ]
+ }
+ ```
+
+ ### Real Example: Guatemala Volcano (Post-disaster - Japanese)
+
+ ```json
+ {
+   "id": "xview2_train_000000_post_ja",
+   "images": ["data/xview2/train/images/guatemala-volcano_00000000_post_disaster.png"],
+   "messages": [
+     {
+       "from": "human",
+       "value": "<image>\nこの画像では災害が発生していますか?"
+     },
+     {
+       "from": "gpt",
+       "value": "はい、この画像では災害が発生しています。これは災害後の画像です。"
+     },
+     {
+       "from": "human",
+       "value": "この画像ではどのような種類の災害が発生しましたか?"
+     },
+     {
+       "from": "gpt",
+       "value": "この画像では火山災害が発生しました。"
+     }
+   ]
+ }
+ ```
+
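+ Conversation IDs follow the pattern `xview2_<split>_<index>_<pre|post>_<en|zh|ja>` (the same convention `verify_dataset.py` parses), so a single language or image state can be selected by ID suffix. A minimal sketch:
+
+ ```python
+ import json
+
+ with open("xview2_train_tier3_sharegpt.json", encoding="utf-8") as f:
+     conversations = json.load(f)
+
+ # Keep only the Japanese post-disaster conversations
+ ja_post = [c for c in conversations if c["id"].endswith("_post_ja")]
+ print(f"{len(ja_post)} Japanese post-disaster conversations")
+ ```
+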
+ ## 📚 Original Dataset Citation
+
+ This dataset is based on the **xBD (xView2) Dataset**:
+
+ ```bibtex
+ @InProceedings{Gupta_2019_CVPR_Workshops,
+   author = {Gupta, Ritwik and Goodman, Bryce and Patel, Nirav and Hosfelt, Ricky and Sajeev, Sandra and Heim, Eric and Doshi, Jigar and Lucas, Keane and Choset, Howie and Gaston, Matthew},
+   title = {Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery},
+   booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
+   month = {June},
+   year = {2019},
+   pages = {10-17}
+ }
+ ```
+
+ **Paper Abstract**: xBD is a large-scale dataset for the advancement of change detection and building damage assessment for humanitarian assistance and disaster recovery research. The dataset provides pre- and post-event multi-band satellite imagery from a variety of disaster events with building polygons, classification labels for damage types, ordinal labels of damage level, and corresponding satellite metadata. xBD contains ~700,000 building annotations across over 5,000 km² of imagery from 15 countries.
+
+ ## 🔗 Data Source
+
+ - **Original Dataset**: [https://xview2.org/dataset](https://xview2.org/dataset)
+ - **License**: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
+
+ ## 📋 License
+
+ This derivative dataset follows the original license:
+
+ **CC BY-NC-SA 4.0** - You are free to:
+ - **Share** — copy and redistribute the material in any medium or format
+ - **Adapt** — remix, transform, and build upon the material
+
+ Under the following terms:
+ - **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made
+ - **NonCommercial** — You may not use the material for commercial purposes
+ - **ShareAlike** — If you remix, transform, or build upon the material, you must distribute your contributions under the same license
+
+ ## 🎯 Use Cases
+
+ This dataset is suitable for:
+
+ 1. **Vision-Language Model Training**: Multi-modal models that understand disaster imagery
+ 2. **Multi-language AI Systems**: Models that can communicate about disasters in multiple languages
+ 3. **Disaster Assessment**: Automated systems for rapid disaster type identification
+ 4. **Change Detection**: Pre/post disaster image comparison
+ 5. **Humanitarian AI**: Applications for disaster response and recovery
+
+ ## 📦 How to Use
+
+ ### Step 1: Download and Extract
+
+ ```bash
+ # Download from HuggingFace, then extract
+ tar -xzf xview2_train.tar.gz
+ tar -xzf xview2_tier3.tar.gz
+ tar -xzf xview2_test.tar.gz
+ ```
+
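+ Alternatively, `huggingface_hub.snapshot_download` fetches the whole repository from Python; the repo ID below is a placeholder for wherever this dataset is hosted:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download every file in the dataset repo into ./xview2
+ snapshot_download(
+     repo_id="your-username/xview2-disaster-vlm",  # placeholder repo ID
+     repo_type="dataset",
+     local_dir="xview2",
+ )
+ ```
+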
+ ### Step 2: Verify Dataset Integrity
+
+ ```bash
+ python verify_dataset.py
+ ```
+
+ **Expected Output**:
+ ```
+ Verifying dataset integrity...
+ ✅ Dataset is ready
+ ```
+
+ For a detailed verification report:
+ ```bash
+ python verify_dataset.py --verbose
+ ```
+
+ ### Step 3: Load and Use
+
+ ```python
+ import json
+ from PIL import Image
+
+ # Load conversations
+ with open('xview2_train_tier3_sharegpt.json', 'r', encoding='utf-8') as f:
+     conversations = json.load(f)
+
+ # Get first conversation
+ conv = conversations[0]
+
+ # Load image (paths are relative to the directory that contains data/xview2/)
+ image = Image.open(conv['images'][0])
+
+ # Access conversation
+ print(conv['messages'][0]['value'])  # Question 1
+ print(conv['messages'][1]['value'])  # Answer 1
+ ```
+
UPLOAD_INSTRUCTIONS.md ADDED
@@ -0,0 +1,89 @@
+ # HuggingFace Upload Instructions
+
+ ## Step 1: Install HuggingFace CLI
+
+ ```bash
+ cd /home/ohya-bob/Documents/DisasterSynth
+ source .venv/bin/activate
+ uv pip install "huggingface-hub[cli]"
+ ```
+
+ ## Step 2: Login to HuggingFace
+
+ ```bash
+ huggingface-cli login
+ ```
+
+ Enter your HuggingFace token when prompted.
+
+ ## Step 3: Create Dataset Repository
+
+ Go to https://huggingface.co/new-dataset and create a new dataset repository.
+
+ Example name: `your-username/xview2-disaster-vlm`
+
+ ## Step 4: Upload Dataset
+
+ ```bash
+ cd /home/ohya-bob/Documents/DisasterSynth/data/xview2/zipped
+
+ # Upload all files to your dataset repository
+ huggingface-cli upload your-username/xview2-disaster-vlm . . --repo-type dataset
+ ```
+
+ **Note**: Replace `your-username/xview2-disaster-vlm` with your actual repository name.
+
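+ The same upload can also be scripted with the `HfApi.upload_folder` Python API (a sketch, using the same placeholder repo ID):
+
+ ```python
+ from huggingface_hub import HfApi
+
+ # Upload the contents of the current directory to the dataset repo
+ api = HfApi()
+ api.upload_folder(
+     folder_path=".",
+     repo_id="your-username/xview2-disaster-vlm",  # placeholder repo ID
+     repo_type="dataset",
+ )
+ ```
+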
+ ## Alternative: Upload Large Files Individually
+
+ If the upload times out, upload large files separately:
+
+ ```bash
+ cd /home/ohya-bob/Documents/DisasterSynth/data/xview2/zipped
+
+ # Upload compressed archives
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_train.tar.gz xview2_train.tar.gz --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_tier3.tar.gz xview2_tier3.tar.gz --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_test.tar.gz xview2_test.tar.gz --repo-type dataset
+
+ # Upload JSON files
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_train_tier3_sharegpt.json xview2_train_tier3_sharegpt.json --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_test_sharegpt.json xview2_test_sharegpt.json --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_train_tier3.json xview2_train_tier3.json --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm xview2_test.json xview2_test.json --repo-type dataset
+
+ # Upload documentation and scripts
+ huggingface-cli upload your-username/xview2-disaster-vlm README.md README.md --repo-type dataset
+ huggingface-cli upload your-username/xview2-disaster-vlm verify_dataset.py verify_dataset.py --repo-type dataset
+
+ # Upload samples folder
+ huggingface-cli upload your-username/xview2-disaster-vlm samples samples --repo-type dataset
+ ```
+
+ ## Step 5: Set Repository Metadata
+
+ After upload, edit your dataset card on HuggingFace:
+
+ - **License**: CC BY-NC-SA 4.0
+ - **Tags**: `computer-vision`, `disaster-detection`, `multi-language`, `satellite-imagery`, `xview2`
+ - **Languages**: English, Chinese, Japanese
+
+ ## Files to Upload (Total: ~28.8 GB)
+
+ - [x] xview2_train.tar.gz (8.04 GB)
+ - [x] xview2_tier3.tar.gz (17.79 GB)
+ - [x] xview2_test.tar.gz (2.67 GB)
+ - [x] xview2_train_tier3_sharegpt.json (~36 MB, per its Git LFS pointer)
+ - [x] xview2_test_sharegpt.json (~14 MB)
+ - [x] xview2_train_tier3.json (~195 MB)
+ - [x] xview2_test.json (~20 MB)
+ - [x] README.md
+ - [x] verify_dataset.py
+ - [x] samples/images/ (4 images)
+
+ ## Troubleshooting
+
+ If upload fails:
+ - Try uploading smaller files first
+ - Re-run the same upload command; files already present in the repository are skipped
+ - Split very large files if needed
+
samples/images/guatemala-volcano_00000000_post_disaster.png ADDED

Git LFS Details

  • SHA256: 61cd93ca54a9f6ae22617f8258505d8143c0d9b93866de3c6a498c1f569396e4
  • Pointer size: 132 Bytes
  • Size of remote file: 1.59 MB
samples/images/guatemala-volcano_00000000_pre_disaster.png ADDED

Git LFS Details

  • SHA256: d9927dc65d363146857587758040fd23578e32308e9a3fc64c8de9ff69140c39
  • Pointer size: 132 Bytes
  • Size of remote file: 1.63 MB
samples/images/hurricane-florence_00000004_post_disaster.png ADDED

Git LFS Details

  • SHA256: 4d6cee3b51e7856b0d808a48c45a6bba64eb5ba3a24f9919e0a0bf03a57f01e0
  • Pointer size: 132 Bytes
  • Size of remote file: 1.09 MB
samples/images/santa-rosa-wildfire_00000000_post_disaster.png ADDED

Git LFS Details

  • SHA256: 098a1985570bb48c589ea902b181f02bb76cddd551ef6aae894a2b8a50863734
  • Pointer size: 132 Bytes
  • Size of remote file: 1.67 MB
verify_dataset.py ADDED
@@ -0,0 +1,260 @@
+ #!/usr/bin/env python3
+ """
+ Verify xView2 dataset integrity
+ Checks that all files referenced in JSON metadata exist
+ """
+
+ import json
+ from pathlib import Path
+ from tqdm import tqdm
+ from collections import defaultdict
+ from typing import Dict, Tuple
+
+
+ def verify_dataset_split(
+     json_file: Path,
+     base_dir: Path,
+     split_name: str,
+     verbose: bool = False
+ ) -> Tuple[bool, Dict]:
+     """
+     Verify a single dataset split
+
+     Args:
+         json_file: Path to JSON metadata file
+         base_dir: Base directory containing the dataset
+         split_name: Name of the split (train/test)
+         verbose: Print detailed statistics
+
+     Returns:
+         Tuple of (all_valid, statistics)
+     """
+     if not json_file.exists():
+         if verbose:
+             print(f"❌ JSON file not found: {json_file}")
+         return False, {}
+
+     # Load JSON
+     with open(json_file, 'r', encoding='utf-8') as f:
+         data = json.load(f)
+
+     # Statistics
+     stats = {
+         'total_entries': len(data),
+         'missing_files': [],
+         'disaster_types': defaultdict(int),
+         'valid_entries': 0,
+         'invalid_entries': 0
+     }
+
+     # Check each entry
+     all_valid = True
+
+     # Show a progress bar only in verbose mode
+     iterator = tqdm(data, desc=f"Checking {split_name}", unit="entry") if verbose else data
+
+     for idx, entry in enumerate(iterator):
+         entry_valid = True
+
+         # Count disaster types
+         disaster_type = entry.get('disaster_type', 'unknown')
+         stats['disaster_types'][disaster_type] += 1
+
+         # Check all required fields
+         required_fields = [
+             'pre_disaster_image',
+             'post_disaster_image',
+             'pre_disaster_mask',
+             'post_disaster_mask',
+             'disaster',
+             'disaster_type'
+         ]
+
+         for field in required_fields:
+             if field not in entry:
+                 stats['missing_files'].append({
+                     'entry_idx': idx,
+                     'field': field,
+                     'reason': 'Field missing from JSON'
+                 })
+                 entry_valid = False
+                 continue
+
+             # Check if file exists (for image/mask paths)
+             if field.endswith('_image') or field.endswith('_mask') or field.endswith('_colormask'):
+                 file_path = base_dir / entry[field]
+                 if not file_path.exists():
+                     stats['missing_files'].append({
+                         'entry_idx': idx,
+                         'field': field,
+                         'path': str(file_path),
+                         'reason': 'File not found'
+                     })
+                     entry_valid = False
+
+         if entry_valid:
+             stats['valid_entries'] += 1
+         else:
+             stats['invalid_entries'] += 1
+             all_valid = False
+
+     # Print errors if any
+     if not all_valid and verbose:
+         print(f"\n✗ Invalid entries: {stats['invalid_entries']}")
+         print(f"✗ Missing files: {len(stats['missing_files'])}")
+         if stats['missing_files']:
+             print("\nFirst 5 missing files:")
+             for missing in stats['missing_files'][:5]:
+                 print(f"  - Entry {missing['entry_idx']}: {missing['field']} - {missing['reason']}")
+                 if 'path' in missing:
+                     print(f"    Path: {missing['path']}")
+
+     return all_valid, stats
+
+
+ def verify_sharegpt_format(
+     sharegpt_file: Path,
+     split_name: str,
+     verbose: bool = False
+ ) -> Tuple[bool, Dict]:
+     """
+     Verify ShareGPT format file
+
+     Args:
+         sharegpt_file: Path to ShareGPT JSON file
+         split_name: Name of the split
+         verbose: Print detailed statistics
+
+     Returns:
+         Tuple of (all_valid, statistics)
+     """
+     if not sharegpt_file.exists():
+         if verbose:
+             print(f"❌ ShareGPT file not found: {sharegpt_file}")
+         return False, {}
+
+     # Load JSON
+     with open(sharegpt_file, 'r', encoding='utf-8') as f:
+         conversations = json.load(f)
+
+     stats = {
+         'total_conversations': len(conversations),
+         'valid_conversations': 0,
+         'invalid_conversations': 0,
+         'languages': defaultdict(int),
+         'image_types': defaultdict(int),
+         'issues': []
+     }
+
+     all_valid = True
+
+     # Show a progress bar only in verbose mode
+     iterator = tqdm(conversations, desc=f"Checking ShareGPT {split_name}", unit="conv") if verbose else conversations
+
+     for idx, conv in enumerate(iterator):
+         conv_valid = True
+
+         # Check required fields
+         if 'id' not in conv:
+             stats['issues'].append(f"Entry {idx}: Missing 'id' field")
+             conv_valid = False
+         else:
+             # Extract language and image state from the ID suffix
+             parts = conv['id'].split('_')
+             if len(parts) >= 4:
+                 img_type = parts[-2]  # pre or post
+                 lang = parts[-1]  # en, zh, ja
+                 stats['languages'][lang] += 1
+                 stats['image_types'][img_type] += 1
+
+         if 'images' not in conv or not conv['images']:
+             stats['issues'].append(f"Entry {idx}: Missing or empty 'images' field")
+             conv_valid = False
+
+         if 'messages' not in conv or len(conv['messages']) != 4:
+             stats['issues'].append(f"Entry {idx}: Expected 4 messages, got {len(conv.get('messages', []))}")
+             conv_valid = False
+         else:
+             # Check message structure
+             messages = conv['messages']
+             expected_pattern = ['human', 'gpt', 'human', 'gpt']
+             actual_pattern = [m.get('from', '') for m in messages]
+
+             if actual_pattern != expected_pattern:
+                 stats['issues'].append(f"Entry {idx}: Unexpected message pattern {actual_pattern}")
+                 conv_valid = False
+
+             # Check first message has <image> tag
+             if '<image>' not in messages[0].get('value', ''):
+                 stats['issues'].append(f"Entry {idx}: First message missing <image> tag")
+                 conv_valid = False
+
+         if conv_valid:
+             stats['valid_conversations'] += 1
+         else:
+             stats['invalid_conversations'] += 1
+             all_valid = False
+
+     # Print errors if any
+     if not all_valid and verbose:
+         print(f"\n✗ Invalid conversations: {stats['invalid_conversations']}")
+         print("\nFirst 5 issues:")
+         for issue in stats['issues'][:5]:
+             print(f"  - {issue}")
+
+     return all_valid, stats
+
+
+ def main():
+     """Main verification function"""
+
+     import sys
+
+     # Check for verbose flag
+     verbose = '--verbose' in sys.argv or '-v' in sys.argv
+
+     if not verbose:
+         print("Verifying dataset integrity...", end=" ", flush=True)
+
+     # Resolve paths relative to this script so the check works
+     # wherever the dataset was extracted
+     data_root = Path(__file__).resolve().parent
+
+     # Verify original metadata files
+     train_json = data_root / "xview2_train_tier3.json"
+     test_json = data_root / "xview2_test.json"
+
+     train_valid, train_stats = verify_dataset_split(train_json, data_root, "train", verbose=verbose)
+     test_valid, test_stats = verify_dataset_split(test_json, data_root, "test", verbose=verbose)
+
+     # Verify ShareGPT format files
+     train_sharegpt = data_root / "xview2_train_tier3_sharegpt.json"
+     test_sharegpt = data_root / "xview2_test_sharegpt.json"
+
+     train_sharegpt_valid, train_sharegpt_stats = verify_sharegpt_format(train_sharegpt, "train", verbose=verbose)
+     test_sharegpt_valid, test_sharegpt_stats = verify_sharegpt_format(test_sharegpt, "test", verbose=verbose)
+
+     # Overall summary
+     all_checks_passed = train_valid and test_valid and train_sharegpt_valid and test_sharegpt_valid
+
+     if not verbose:
+         print("")  # New line after "Verifying..."
+
+     if all_checks_passed:
+         print("✅ Dataset is ready")
+     else:
+         print("❌ Dataset verification failed")
+         print("\nIssues found:")
+         if not train_valid:
+             print("  - Training metadata has issues")
+         if not test_valid:
+             print("  - Test metadata has issues")
+         if not train_sharegpt_valid:
+             print("  - Training ShareGPT format has issues")
+         if not test_sharegpt_valid:
+             print("  - Test ShareGPT format has issues")
+         print("\nRun with --verbose flag for detailed information")
+
+
+ if __name__ == "__main__":
+     main()
+
xview2_test.json ADDED
The diff for this file is too large to render. See raw diff
 
xview2_test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08f0665055c516fd93459096c633e404a18ab370ee97407ed550be60b4383e52
+ size 2803334360
xview2_test_sharegpt.json ADDED
The diff for this file is too large to render. See raw diff
 
xview2_tier3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44d11892c206cfc7ac2e858a65a55d3d91a358919651d5872cad7ccd352c4b75
+ size 18649392463
xview2_train.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e56c650cb50c2982dd1223bad4d43900abaed60656903c3e5deff1869f2e807
+ size 8431624516
xview2_train_tier3.json ADDED
The diff for this file is too large to render. See raw diff
 
xview2_train_tier3_sharegpt.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:647c385ff6182ca15d52e130b09d9c0aa90a1d20dee7953c16adb6a88079a95b
+ size 36346896