weikaih committed on
Commit 8e139d6 · verified · 1 Parent(s): bdb5f39

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +12 -42
README.md CHANGED
@@ -21,12 +21,11 @@ WildDet3D-Data consists of 3D bounding box annotations for in-the-wild images fr
 
 | Split | Description | Annotation Source | Images | Annotations | Categories |
 |-------|-------------|-------------------|--------|-------------|------------|
-| **Val** | Validation set | Human | 2,470 | 9,256 | 785 |
-| **Test** | Test set | Human | 2,433 | 5,596 | 633 |
 | **Train (Human)** | Human-reviewed annotations only | Human | 102,979 | 229,934 | 11,879 |
 | **Train (Essential)** | Human + VLM-qualified small objects | Human + VLM | 102,979 | 412,711 | 12,064 |
 | **Train (Synthetic)** | VLM auto-selected annotations | VLM | 896,004 | 3,483,292 | 11,896 |
-| **Total** | | | 1,003,886 | 3,910,855 | 13,499 |
+
+For val/test benchmarks, see [WildDet3D-Bench](https://huggingface.co/datasets/allenai/WildDet3D-Bench).
 
 ## Directory Structure
 
@@ -36,8 +35,6 @@ After downloading and extracting, the dataset should be organized as:
 WildDet3D-Data/
 ├── README.md
 ├── annotations/
-│   ├── InTheWild_v3_val.json                # Val
-│   ├── InTheWild_v3_test.json               # Test
 │   ├── InTheWild_v3_train_human_only.json   # Train (Human) — COCO, LVIS, Obj365
 │   ├── InTheWild_v3_train_human.json        # Train (Essential) — COCO, LVIS, Obj365
 │   ├── InTheWild_v3_train_synthetic.json    # Train (Synthetic) — COCO, LVIS, Obj365
@@ -50,9 +47,7 @@ WildDet3D-Data/
 ├── camera/{split}/                  # Camera parameters (extract from .tar.gz)
 │   └── {source}_{formatted_id}.json # Camera intrinsics (K)
 └── images/                          # Downloaded separately (see Step 2)
-    ├── coco_val/
     ├── coco_train/
-    ├── obj365_val/
     ├── obj365_train/
     └── v3det_train/
 ```
@@ -76,13 +71,14 @@ img = data["images"][0]
 source = img["file_path"].split("/")[1]  # e.g., "coco_train"
 fid = img["formatted_id"]                # e.g., "000000262686"
 
-depth = np.load(f"depth/{split}/{source}_{fid}.npz")["depth"]  # float32, (H, W)
+depth_mm = np.load(f"depth/{split}/{source}_{fid}.npz")["depth"]  # float32, (H, W), in mm
+depth_m = depth_mm / 1000.0  # convert to meters
 camera = json.load(open(f"camera/{split}/{source}_{fid}.json"))
 ```
 
 ### Depth Format
 
-Each `.npz` file contains a single key `"depth"` with a float32 2D array at original image resolution (meters).
+Each `.npz` file contains a single key `"depth"` with a float32 2D array at original image resolution. **Values are in millimeters (mm).** To convert to meters: `depth_m = depth_mm / 1000.0`.
 
 ### Camera Format
 
@@ -118,10 +114,7 @@ huggingface-cli download weikaih/WildDet3D-Data --repo-type dataset --local-dir
 Depth maps are provided as compressed archives. Large splits are split into multiple parts.
 
 ```bash
-# Val and Test (small, single file each)
 mkdir -p depth && cd depth
-tar xzf ../packed/depth_val.tar.gz
-tar xzf ../packed/depth_test.tar.gz
 
 # Train Human (2 parts)
 tar xzf ../packed/depth_train_human_part000.tar.gz
@@ -157,25 +150,14 @@ Images must be downloaded from their original sources and organized into the fol
 
 ```
 images/
-├── coco_val/        # COCO val2017
 ├── coco_train/      # COCO train2017 (includes LVIS images)
-├── obj365_val/      # Objects365 validation
 ├── obj365_train/    # Objects365 training
 └── v3det_train/     # V3Det training
 ```
 
-### COCO (val2017 + train2017)
-
-Used by: Val, Test, Train (all splits)
+### COCO train2017
 
 ```bash
-# COCO val2017 — used by Val and Test
-wget http://images.cocodataset.org/zips/val2017.zip
-unzip val2017.zip
-mkdir -p images/coco_val
-mv val2017/* images/coco_val/
-
-# COCO train2017 — used by Val/Test for LVIS images, and all Train splits
 wget http://images.cocodataset.org/zips/train2017.zip
 unzip train2017.zip
 mkdir -p images/coco_train
@@ -184,13 +166,8 @@ mv train2017/* images/coco_train/
 
 ### Objects365
 
-Used by: Val, Test (val split), Train (train split)
-
 ```bash
 # Objects365 — download from https://www.objects365.org/
-mkdir -p images/obj365_val
-# Images should be named: obj365_val_000000XXXXXX.jpg
-
 mkdir -p images/obj365_train
 # Images should be named: obj365_train_000000XXXXXX.jpg
 ```
@@ -206,15 +183,11 @@ mkdir -p images/v3det_train
 # e.g., images/v3det_train/Q100507578/28_284_50119550013_7d06ded882_c.jpg
 ```
 
-| Source | Directory | Used By |
-|--------|-----------|---------|
-| COCO val2017 | `images/coco_val/` | Val, Test |
-| COCO train2017 | `images/coco_train/` | Val, Test, Train |
-| Objects365 val | `images/obj365_val/` | Val, Test |
-| Objects365 train | `images/obj365_train/` | Train |
-| V3Det train | `images/v3det_train/` | Train (V3Det) |
-
-**For evaluation only** (Val + Test): you only need COCO val2017, COCO train2017, and Objects365 val.
+| Source | Directory |
+|--------|-----------|
+| COCO train2017 | `images/coco_train/` |
+| Objects365 train | `images/obj365_train/` |
+| V3Det train | `images/v3det_train/` |
 
 ## Annotation Format (COCO3D)
 
@@ -275,7 +248,6 @@ for ann in data["annotations"]:
 
 | Use Case | Annotation Files |
 |----------|-----------------|
-| Evaluation | `InTheWild_v3_val.json`, `InTheWild_v3_test.json` |
 | Train (Human only) | `InTheWild_v3_train_human_only.json` + `InTheWild_v3_v3det_human_only.json` |
 | Train (Essential) | `InTheWild_v3_train_human.json` + `InTheWild_v3_v3det_human.json` |
 | Train (Synthetic) | `InTheWild_v3_train_synthetic.json` + `InTheWild_v3_v3det_synthetic.json` |
@@ -283,6 +255,4 @@ for ann in data["annotations"]:
 
 ## License
 
-CC BY 4.0
-
-This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
+- **Annotations**: CC BY 4.0
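The central change in this commit is the depth convention: values are now documented as millimeters rather than meters. A minimal loader sketch of that convention follows; the helper name `load_depth_m` is ours, while the `"depth"` key, float32 dtype, and mm units come from the updated README.

```python
import numpy as np

def load_depth_m(npz_path: str) -> np.ndarray:
    """Load a WildDet3D depth map and return it in meters.

    Per the updated README, each .npz holds a single key "depth":
    a float32 (H, W) array in millimeters at the original image
    resolution. Dividing by 1000 converts mm to meters.
    """
    depth_mm = np.load(npz_path)["depth"]
    return depth_mm.astype(np.float32) / 1000.0
```

For example, a pixel stored as 2500.0 (mm) comes back as 2.5 (m).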
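The COCO3D sections touched by this diff iterate over `data["images"]` and `data["annotations"]`. A common first step with such files is grouping annotations per image; the sketch below assumes the standard COCO `image_id` field on each annotation (not shown in the diff, so verify against the actual files), and the helper name is ours.

```python
from collections import defaultdict

def index_annotations(data: dict) -> dict:
    """Group a COCO-style annotation dict's annotations by image id.

    Assumes each entry in data["annotations"] carries an "image_id"
    field, as in standard COCO; returns {image_id: [annotations]}.
    """
    by_image = defaultdict(list)
    for ann in data["annotations"]:
        by_image[ann["image_id"]].append(ann)
    return dict(by_image)
```

This lets you look up all boxes for one image in O(1) instead of rescanning the full annotation list per image.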