jacoblin committed
Commit 768b9af · verified · 1 parent: 1ba5ecd

Add files using upload-large-folder tool
README.md ADDED
@@ -0,0 +1,84 @@
+ # Cloud4D Dataset
+ Dataset for "Cloud4D: Estimating Cloud Properties at a High Spatial and Temporal Resolution"
+ Project Page: https://cloud4d.jacob-lin.com/
+
+ ## Dataset Structure
+
+ The dataset is organized into two main categories:
+
+ ```
+ Cloud4D/
+ ├── real_world/                      # Real-world stereo camera captures
+ │   ├── 20230705_10/                 # Date-hour folders (YYYYMMDD_HH)
+ │   │   ├── perspective_1/
+ │   │   │   ├── left_images/         # Left camera images
+ │   │   │   ├── right_images/        # Right camera images
+ │   │   │   ├── camera_pair.npz      # Stereo calibration data
+ │   │   │   └── *.npy                # Camera intrinsics/extrinsics
+ │   │   ├── perspective_2/
+ │   │   └── perspective_3/
+ │   └── ...
+ └── synthetic/                       # Synthetic cloud renders
+     ├── terragen/                    # Terragen renders
+     │   ├── perspective_1/
+     │   ├── perspective_2/
+     │   └── perspective_3/
+     └── large_eddy_simulations/      # LES-based renders
+         ├── perspective_1/
+         ├── perspective_2/
+         ├── perspective_3/
+         └── volumes/
+ ```
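The calibration files listed above are NumPy archives and can be inspected with `np.load`. A minimal sketch of reading one; note that the actual key names inside `camera_pair.npz` are not documented in this README, so the names used in the demo (`K_left`, `K_right`, `R`, `t`) are placeholders, and the snippet builds its own stand-in file rather than assuming the dataset is present:

```python
import numpy as np

def load_stereo_calibration(npz_path):
    """Return the arrays stored in an .npz archive as a plain dict."""
    with np.load(npz_path) as data:
        return {key: data[key] for key in data.files}

# Self-contained demo with a stand-in file (real key names may differ):
np.savez("camera_pair_demo.npz",
         K_left=np.eye(3), K_right=np.eye(3),   # hypothetical intrinsics
         R=np.eye(3), t=np.zeros(3))            # hypothetical relative pose
calib = load_stereo_calibration("camera_pair_demo.npz")
print(sorted(calib.keys()))  # → ['K_left', 'K_right', 'R', 't']
```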
+
+ ## Quick Start
+
+ ### Download and Extract
+
+ ```bash
+ # Clone the dataset
+ git clone https://huggingface.co/datasets/jacoblin/Cloud4D
+ cd Cloud4D
+
+ # Extract all archives
+ python unpack.py
+
+ # Or extract to a specific location
+ python unpack.py --output /path/to/Cloud4D
+ ```
+
+ ### Selective Extraction
+
+ ```bash
+ # Extract only real-world data
+ python unpack.py --subset real_world
+
+ # Extract only synthetic data
+ python unpack.py --subset synthetic
+
+ # Extract a specific date
+ python unpack.py --filter 20230705
+
+ # List available archives without extracting
+ python unpack.py --list
+ ```
+
+ ### Parallel Extraction (Faster)
+
+ ```bash
+ # Use 4 parallel workers
+ python unpack.py --jobs 4
+ ```
+
+ ## Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @inproceedings{lin2025cloudd,
+   title={Cloud4D: Estimating Cloud Properties at a High Spatial and Temporal Resolution},
+   author={Jacob Lin and Edward Gryspeerdt and Ronald Clark},
+   booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
+   year={2025},
+   url={https://openreview.net/forum?id=g2AAvmBwkS}
+ }
+ ```
real_world/20230705_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8e5cea8bd8a34d7aacf29ca84a7d4906bd65e16e91b3f1e3cf174a34aac8858
+ size 1161118885
real_world/20230705_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56e5946d201708eb20d5c2ae8718bc3df172553f49c36542322ebf3fa0716ac8
+ size 1203617465
real_world/20230717_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecfb3e9296140e200449bc7e0e2367065c2a623ba01bb712b498d7eb1adec75d
+ size 1295947783
real_world/20230717_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d047c515131ef530b24b4f84862783361b2ee1d6d4f0827f4549981d068e6436
+ size 1224286008
real_world/20230720_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fddf681e62e53cb9bcf07e9acd0df47190aaa2ad07269b7c3cfb275489d56673
+ size 1211943652
real_world/20230723_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92262c4c50dc39cd33c3c529caa755d2dccb0c52e2aeaff725efb4040cfbc930
+ size 758659974
real_world/20230725_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8950d99a35e88df10068f263f729214069e40a1b231a7539ea780f694663a724
+ size 1062260243
real_world/20230806_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04109e242e497f7feb4bac560bbd733788f16943a96e5484101623fe9f958199
+ size 1136773444
real_world/20230807_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22ddaf0c9134ade34017de097166b6a564e35c330454feb9af7a856612516dbe
+ size 1050388450
real_world/20230807_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2ec41228e42254cf6b11f6a1a36b5086214740453ff720bf90a8861bc6c4ab2
+ size 1002149566
real_world/20230815_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b7306053c3570ce6c2f4ce6ccb83f6b8aa8610f2d2f32c3f1eb8befd7622c3c
+ size 1201651863
real_world/20230821_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cc096e7a53f833e0fb47c5e3e72bbc58fdcf3d75b99c2e6e24716e0baa40127
+ size 1000207557
real_world/20230825_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b8bb0c767bf5250f01f8260c3a6ed8f18a16ab16a43b7da5c7bb836709a9c12
+ size 1199614612
real_world/20230911_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fd6bdb2ec6bacb4b15ffb0a06b30f992b1b00667a0e7dfa19f6c0fc226353f1
+ size 883999594
real_world/20230923_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc99bdbc5c797a29fbcf69687fc041cde521c51a95988d5ba06a60126832a4c4
+ size 749512676
real_world/20230923_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ffc11b87098b982c8455a88e9ae47d413091610e195762f93c5d0c6d06a662a
+ size 1138697915
real_world/20230923_12.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71f541699ed45db6cfd85087de9f2e1e36147ab2dc7d3605fd3462f2b1870194
+ size 1032600472
synthetic/large_eddy_simulations.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e205dc463d49f32a2b2b9d9ea98e4f8c5499e4b1c7269cd370aaed0afc820a51
+ size 33098775283
synthetic/terragen.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5343143787fd60933cc5b6add614d6d6a73af5d74fd764974f9b62c6043965b
+ size 1641841189
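Each archive above is stored as a Git LFS pointer: a three-line text file recording the spec version, the `sha256` digest of the real payload, and its size in bytes. After downloading an archive, its integrity can be checked against the pointer. A minimal sketch; the `parse_pointer` and `verify_download` helpers are illustrative and not part of this repo:

```python
import hashlib
from pathlib import Path

def parse_pointer(text):
    """Parse a Git LFS pointer file into its version/oid/size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # oid is "<algo>:<hex digest>"
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def verify_download(pointer_text, downloaded_path):
    """Return True if the downloaded file matches the pointer's size and digest."""
    ptr = parse_pointer(pointer_text)
    data = Path(downloaded_path).read_bytes()
    return (len(data) == ptr["size"]
            and hashlib.new(ptr["algo"], data).hexdigest() == ptr["digest"])

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f8e5cea8bd8a34d7aacf29ca84a7d4906bd65e16e91b3f1e3cf174a34aac8858
size 1161118885"""
print(parse_pointer(pointer)["size"])  # → 1161118885
```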
unpack.py ADDED
@@ -0,0 +1,221 @@
+ #!/usr/bin/env python3
+ """
+ Unpack Cloud4D dataset archives.
+
+ This script extracts all tar.gz archives to reconstruct the original
+ Cloud4D directory structure.
+
+ Usage:
+     python unpack.py [--output /path/to/output] [--subset real_world|synthetic] [--jobs N]
+
+ Examples:
+     # Extract everything to ./Cloud4D
+     python unpack.py
+
+     # Extract to a specific location
+     python unpack.py --output /data/Cloud4D
+
+     # Extract only real_world data
+     python unpack.py --subset real_world
+
+     # Extract only synthetic data
+     python unpack.py --subset synthetic
+
+     # Extract a specific date-hour (real_world)
+     python unpack.py --filter 20230705_10
+
+     # Use parallel extraction
+     python unpack.py --jobs 4
+ """
+
+ import argparse
+ import shutil
+ import subprocess
+ import sys
+ import tarfile
+ from pathlib import Path
+ from concurrent.futures import ProcessPoolExecutor, as_completed
+
+
+ def extract_archive(archive_path, output_dir):
+     """Extract a tar.gz archive to the output directory."""
+     archive_path = Path(archive_path)
+     output_dir = Path(output_dir)
+
+     # Try pigz for parallel decompression (faster than plain gzip)
+     if shutil.which('pigz'):
+         try:
+             cmd = f'pigz -dc "{archive_path}" | tar -xf - -C "{output_dir}"'
+             subprocess.run(cmd, shell=True, check=True)
+             return True
+         except subprocess.CalledProcessError:
+             pass
+
+     # Fall back to regular tar
+     try:
+         cmd = ['tar', '-xzf', str(archive_path), '-C', str(output_dir)]
+         subprocess.run(cmd, check=True)
+         return True
+     except (subprocess.CalledProcessError, FileNotFoundError):
+         pass
+
+     # Final fallback: Python's tarfile module
+     try:
+         with tarfile.open(archive_path, 'r:gz') as tar:
+             tar.extractall(output_dir)
+         return True
+     except Exception as e:
+         print(f"Error extracting {archive_path}: {e}")
+         return False
+
+
+ def extract_single(args):
+     """Worker function for parallel extraction."""
+     archive_path, output_dir, name = args
+     try:
+         if extract_archive(archive_path, output_dir):
+             return (name, 'extracted')
+         return (name, 'failed')
+     except Exception as e:
+         return (name, f'error: {e}')
+
+
+ def main():
+     parser = argparse.ArgumentParser(
+         description='Unpack Cloud4D dataset archives',
+         formatter_class=argparse.RawDescriptionHelpFormatter,
+         epilog="""
+ Examples:
+     python unpack.py                         # Extract all to ./Cloud4D
+     python unpack.py --output /data/Cloud4D  # Extract to specific location
+     python unpack.py --subset real_world     # Extract only real_world
+     python unpack.py --filter 20230705       # Extract matching archives
+     python unpack.py --jobs 4                # Parallel extraction
+ """
+     )
+     parser.add_argument('--output', '-o', type=Path, default=Path('./Cloud4D'),
+                         help='Output directory (default: ./Cloud4D)')
+     parser.add_argument('--subset', choices=['real_world', 'synthetic'],
+                         help='Extract only a specific subset')
+     parser.add_argument('--filter', type=str,
+                         help='Filter archives by name (e.g., "20230705" for a specific date)')
+     parser.add_argument('--jobs', '-j', type=int, default=1,
+                         help='Number of parallel extraction jobs (default: 1)')
+     parser.add_argument('--list', '-l', action='store_true',
+                         help='List available archives without extracting')
+     args = parser.parse_args()
+
+     # Find the script directory (where archives are located)
+     script_dir = Path(__file__).parent.resolve()
+
+     # Collect archives
+     archives = []
+
+     # Real-world archives
+     real_world_dir = script_dir / 'real_world'
+     if real_world_dir.exists() and args.subset in (None, 'real_world'):
+         for archive in sorted(real_world_dir.glob('*.tar.gz')):
+             if args.filter is None or args.filter in archive.name:
+                 archives.append(('real_world', archive))
+
+     # Synthetic archives
+     synthetic_dir = script_dir / 'synthetic'
+     if synthetic_dir.exists() and args.subset in (None, 'synthetic'):
+         for archive in sorted(synthetic_dir.glob('*.tar.gz')):
+             if args.filter is None or args.filter in archive.name:
+                 archives.append(('synthetic', archive))
+
+     if not archives:
+         print("No archives found matching the criteria.")
+         print(f"Searched in: {script_dir}")
+         sys.exit(1)
+
+     # List mode
+     if args.list:
+         print("Available archives:")
+         print()
+         current_subset = None
+         for subset, archive in archives:
+             if subset != current_subset:
+                 print(f"  {subset}/")
+                 current_subset = subset
+             size_mb = archive.stat().st_size / 1024 / 1024
+             print(f"    {archive.name} ({size_mb:.1f} MB)")
+         print()
+         total_size = sum(a.stat().st_size for _, a in archives) / 1024 / 1024 / 1024
+         print(f"Total: {len(archives)} archives, {total_size:.2f} GB")
+         return
+
+     # Extract mode
+     output_dir = args.output.resolve()
+
+     print("=" * 70)
+     print("Cloud4D Dataset Unpacker")
+     print("=" * 70)
+     print(f"Output directory: {output_dir}")
+     print(f"Archives to extract: {len(archives)}")
+     if args.subset:
+         print(f"Subset: {args.subset}")
+     if args.filter:
+         print(f"Filter: {args.filter}")
+     print()
+
+     # Create output structure
+     (output_dir / 'real_world').mkdir(parents=True, exist_ok=True)
+     (output_dir / 'synthetic').mkdir(parents=True, exist_ok=True)
+
+     # Prepare extraction tasks
+     tasks = []
+     for subset, archive in archives:
+         target_dir = output_dir / subset
+         # .stem only strips one suffix ('x.tar.gz' -> 'x.tar'), so trim explicitly
+         name = f"{subset}/{archive.name.removesuffix('.tar.gz')}"
+         tasks.append((archive, target_dir, name))
+
+     # Extract
+     print("Extracting archives...")
+     results = []
+
+     if args.jobs > 1:
+         with ProcessPoolExecutor(max_workers=args.jobs) as executor:
+             futures = {executor.submit(extract_single, task): task[2] for task in tasks}
+             for future in as_completed(futures):
+                 name, status = future.result()
+                 results.append((name, status))
+                 print(f"  [{status.upper()}] {name}")
+     else:
+         for task in tasks:
+             name, status = extract_single(task)
+             results.append((name, status))
+             print(f"  [{status.upper()}] {name}")
+
+     # Summary
+     print()
+     print("=" * 70)
+     print("EXTRACTION COMPLETE")
+     print("=" * 70)
+
+     extracted = sum(1 for _, s in results if s == 'extracted')
+     failed = sum(1 for _, s in results if s != 'extracted')
+
+     print(f"Successfully extracted: {extracted}")
+     if failed:
+         print(f"Failed: {failed}")
+     print()
+     print(f"Dataset extracted to: {output_dir}")
+     print()
+     print("Directory structure:")
+     print(f"  {output_dir}/")
+     print("    real_world/")
+     print("      20230705_10/")
+     print("        perspective_1/")
+     print("        perspective_2/")
+     print("        perspective_3/")
+     print("      ... (more date-hour folders)")
+     print("    synthetic/")
+     print("      terragen/")
+     print("      large_eddy_simulations/")
+
+
+ if __name__ == '__main__':
+     main()
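The final `tarfile` fallback in `extract_archive` extracts members without path sanitisation. On Python 3.12+ the standard library supports extraction filters that reject absolute paths and `..` members; a hardened variant of that fallback could look like the sketch below (the `safe_extract` helper is an assumption, not part of the script above, and behaviour on older interpreters is unchanged):

```python
import sys
import tarfile
from pathlib import Path

def safe_extract(archive_path, output_dir):
    """tarfile fallback that applies the 'data' filter when available (Python 3.12+)."""
    with tarfile.open(archive_path, 'r:gz') as tar:
        if sys.version_info >= (3, 12):
            # Rejects absolute paths, '..' components, and special files
            tar.extractall(output_dir, filter='data')
        else:
            tar.extractall(output_dir)

# Round-trip demo with a tiny archive
Path('demo_src').mkdir(exist_ok=True)
Path('demo_src/hello.txt').write_text('hi')
with tarfile.open('demo.tar.gz', 'w:gz') as tar:
    tar.add('demo_src/hello.txt')
safe_extract('demo.tar.gz', 'demo_out')
print(Path('demo_out/demo_src/hello.txt').read_text())  # → hi
```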