---
license: apache-2.0
task_categories:
- image-to-3d
- object-detection
- mask-generation
- image-segmentation
- depth-estimation
- keypoint-detection
tags:
- pose-estimation
- synthetic-data
- computer-vision
- unreal-engine
- coco-keypoints
- instance-segmentation
- depth-maps
size_categories:
- 1M<n<10M
pretty_name: CURDIE3
---
# Dataset Card for CURDIE3
## Dataset Details
### Dataset Description
**CURDIE3** (Crafted Unreal Renders of Detailed Individuals in 3 Environments) is a synthetic pose estimation dataset featuring photorealistic RGB images, metric depth maps, instance segmentation masks, and multi-format keypoint annotations rendered in Unreal Engine.
The dataset contains **1,408,410 images** with ground-truth labels, rendered at **30 fps**, featuring 100 distinct actor models performing 1,400 distinct animations across three environment maps: `field`, `house`, and `urban`. Each sample provides comprehensive annotations including COCO-17 body keypoints, MediaPipe hand keypoints, foot keypoints, camera intrinsics/extrinsics, and per-actor instance segmentation masks in COCO RLE format.



- **Curated by:** Brendan Wilson, Astrid Wilde, Chris Johnson, Michael Listo, Audrey Ito, Anna Shive, Sherry Wallace
- **License:** Apache 2.0
### Dataset Sources
- **Paper:** Wilson et al., "CURDIE3: Crafted Unreal Renders of Detailed Individuals in 3 Environments" (2026)
## Uses
### Direct Use
CURDIE3 is designed for training and evaluating models on the following tasks:
- 2D/3D human pose estimation
- Instance segmentation
- Multi-person detection and tracking
- Hand pose estimation
- Keypoint detection and localization
### Out-of-Scope Use
Due to known limitations, CURDIE3 is **not suitable** for:
- **Depth estimation from RGB:** Ground-truth depth is ambiguous at mesh intersection points
- **Occlusion reasoning and amodal completion:** Intersection boundaries are physically impossible
- **Physics-based pose estimation or motion prediction:** Some configurations are non-physical
- **Contact and interaction detection:** Difficult to distinguish intentional contact from erroneous intersection
- **Sim-to-real transfer:** Where physical plausibility is required
- **Facial expression recognition or emotion detection:** All actors maintain neutral expressions
- **Lip-sync or speech-related tasks:** No facial animations present
- **Social interaction modeling:** No coordinated interactions between subjects
- **Group activity recognition:** Actors are animated independently
- **Ground contact estimation or footskate detection:** No foot-ground contact labels; feet may penetrate floors or float
## Dataset Structure
| Attribute | Value |
|-----------|-------|
| Total Images | 1,408,410 |
| Frame Rate | 30 fps |
| Actor Models | 100 |
| Animations | 1,400 |
| Environments | 3 (`field`, `house`, `urban`) |
### Camera Movement Types
The dataset contains four types of camera movements:
| Movement Type | Description | Tag |
|---------------|-------------|-----|
| **Stationary** | Fixed camera position and orientation | `free` or `orbit` |
| **Dolly** | Linear translation in a constant direction for the duration of the animation | `free` |
| **Arc** | Linear translation combined with constant rotation for the duration of the animation | `free` |
| **Orbit** | Camera pivots around a specified center point in 3D space | `orbit` |
**Tag Reference:**
- All **dolly** and **arc** movements are tagged as `free`
- All **orbit** movements are tagged as `orbit`
- **Stationary** cameras may be tagged as either `free` or `orbit`
### File Format
The dataset is distributed as a split 7z archive in 40 GB volumes. Each sample consists of a pair of files sharing a globally unique 12-digit identifier:
- `000000000042.png` — RGB image
- `000000000042.npz` — Compressed NumPy archive containing all annotations
**Extraction:**
```bash
7z x curdie3.7z.001 # Automatically processes all volumes
```
**Loading NPZ files:**
```python
import numpy as np
data = np.load(path, allow_pickle=True) # allow_pickle=True required for RLE masks
```
### Coordinate Systems
| Space | Convention |
|-------|------------|
| World 3D | UE4: X=Forward, Y=Right, Z=Up (left-handed) |
| Screen 2D | Origin at top-left, X=right, Y=down (pixels) |
| Rotation | Degrees: Pitch (about Y), Yaw (about Z), Roll (about X) |
### NPZ Field Reference
#### Frame Metadata
- `frame_id` (str): Original frame identifier
- `dataset` (str): Source dataset name
- `image_shape` (uint16, shape 3): [height, width, channels]
#### Depth
- `depth_m` (float16, shape H×W): Per-pixel depth in meters (range ~0–30m)
#### Camera Intrinsics & Extrinsics
- `K` (float32, shape 3×3): Camera intrinsic matrix
- `camera_location` (float16, shape 3): World-space position in meters
- `camera_rotation` (float16, shape 3): Orientation [Pitch, Yaw, Roll] in degrees
- `camera_direction` (float16, shape 3): Unit forward vector
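The intrinsic matrix `K`, together with `depth_m`, is enough to recover a camera-space point cloud under the standard pinhole model. A minimal sketch (this assumes depth is measured along the camera's forward axis rather than as ray length; verify against the dataset before relying on it):

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a metric depth map into camera-space 3D points.

    Assumes a standard pinhole model with depth measured along the
    camera's forward axis (an assumption; confirm against the data).
    """
    h, w = depth_m.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (H, W, 3)
```

Cast `depth_m` from float16 before heavy arithmetic, as done above, to avoid precision loss.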
#### Sensor Settings
- `sensor_width_mm`, `sensor_height_mm` (float16): Physical sensor dimensions
- `focal_length_mm` (float16): Lens focal length
- `hfov_deg` (float16): Horizontal field of view
- `exposure_compensation` (float16): Exposure compensation stops
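Under the pinhole model, these physical sensor fields relate directly to the pixel-space focal length in `K` (`fx = focal_length_mm * image_width_px / sensor_width_mm`) and to the horizontal field of view. A sketch with hypothetical values (the numbers below are illustrative, not taken from the dataset):

```python
import numpy as np

# Hypothetical values for illustration; real values come from the NPZ fields
sensor_width_mm = 36.0
focal_length_mm = 50.0
image_width_px = 1920

# Pixel focal length from physical focal length and sensor width
fx = focal_length_mm * image_width_px / sensor_width_mm

# Horizontal field of view from the same two quantities
hfov_deg = np.degrees(2 * np.arctan(sensor_width_mm / (2 * focal_length_mm)))
```

Comparing `fx` derived this way against `K[0, 0]` from the NPZ is a quick sanity check on a sample's metadata.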
#### Scene Metadata
- `selected_map` (str): Environment name (`field`, `house`, or `urban`)
- `time_of_day` (str): In-scene time as "HH:MM:SS"
- `capture_mode` (str): Camera movement tag (`free` or `orbit`)
#### Instance Segmentation
- `actor_mask_rles` (object, shape N): Array of COCO RLE bytes, one per actor
- `actor_names` (object, shape N): Actor identifiers
- `num_actors` (int): Number of actors in frame
#### Per-Actor Annotations
- `actor_world_locations` (float16, shape N×3): World position in meters
- `actor_world_rotations` (float16, shape N×3): Orientation in degrees
- `actor_bbox_xywh` (int32, shape N×4): COCO-format bounding boxes
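The boxes follow the standard COCO `[x, y, w, h]` layout, so cropping an actor from the RGB frame is a direct slice. A minimal sketch (clamping to image bounds is omitted for brevity):

```python
import numpy as np

def crop_actor(rgb: np.ndarray, bbox_xywh) -> np.ndarray:
    """Crop one actor from an (H, W, C) image using a COCO [x, y, w, h] box."""
    x, y, w, h = (int(v) for v in bbox_xywh)
    return rgb[y:y + h, x:x + w]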
#### Keypoints
- `keypoints_body_3d` (float16, shape N×17×4): COCO-17 body keypoints, world space
- `keypoints_body_2d` (float16, shape N×17×3): COCO-17 body keypoints, image space
- `keypoints_foot_3d` / `keypoints_foot_2d`: 6 foot keypoints per actor
- `keypoints_lefthand_3d` / `keypoints_lefthand_2d`: 21 MediaPipe hand keypoints
- `keypoints_righthand_3d` / `keypoints_righthand_2d`: 21 MediaPipe hand keypoints
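For the 2D keypoint arrays, a common pattern is to split coordinates from the third channel. A sketch assuming that channel is a COCO-style visibility flag (an assumption; confirm its semantics against the dataset before filtering):

```python
import numpy as np

def visible_keypoints(kpts_2d: np.ndarray, min_vis: float = 1.0):
    """Split a (J, 3) keypoint array into visible (x, y) rows and a mask.

    Assumes the third channel is a COCO-style visibility flag
    (0 = unlabeled, 1 = labeled but occluded, 2 = visible); this is
    an assumption about the dataset's convention.
    """
    xy = kpts_2d[:, :2]
    vis = kpts_2d[:, 2] >= min_vis
    return xy[vis], vis
```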
### Keypoint Definitions
**COCO-17 Body Keypoints:**
```
0: nose, 1: left_eye, 2: right_eye, 3: left_ear, 4: right_ear,
5: left_shoulder, 6: right_shoulder, 7: left_elbow, 8: right_elbow,
9: left_wrist, 10: right_wrist, 11: left_hip, 12: right_hip,
13: left_knee, 14: right_knee, 15: left_ankle, 16: right_ankle
```
**MediaPipe Hand (21 points):**
```
0: wrist, 1-4: thumb, 5-8: index, 9-12: middle, 13-16: ring, 17-20: pinky
```
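For visualization, the body keypoints above can be connected with a skeleton. The sketch below uses one common 0-indexed connectivity for the COCO-17 layout (the official COCO skeleton adds a few extra links, e.g. ear-to-shoulder); it works on any `(17, 3)` keypoint row from `keypoints_body_2d`:

```python
# A common COCO-17 skeleton connectivity (0-indexed), usable for drawing
COCO17_EDGES = [
    (0, 1), (0, 2), (1, 3), (2, 4),            # head
    (5, 6), (5, 7), (7, 9), (6, 8), (8, 10),   # arms
    (11, 12), (5, 11), (6, 12),                # torso
    (11, 13), (13, 15), (12, 14), (14, 16),    # legs
]

def skeleton_segments(kpts_2d):
    """Return one ((x1, y1), (x2, y2)) line segment per skeleton edge."""
    return [(tuple(kpts_2d[a][:2]), tuple(kpts_2d[b][:2]))
            for a, b in COCO17_EDGES]
```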
## Dataset Creation
### Curation Rationale
CURDIE3 was created to provide a large-scale synthetic dataset for human pose estimation research, offering pixel-perfect ground truth annotations that are difficult or impossible to obtain from real-world data collection.
### Source Data
#### Data Collection and Processing
All data was rendered in Unreal Engine using 100 distinct actor models and 1,400 animations across three environment maps. Camera positions include stationary, dolly, arc, and orbit capture modes with varying focal lengths and exposure settings.
#### Who are the source data producers?
The dataset was synthetically generated; no real human subjects were involved.
### Annotations
#### Annotation process
All annotations are automatically generated during the rendering process, providing pixel-perfect ground truth without manual labeling.
#### Personal and Sensitive Information
This dataset contains only synthetic data with no real human subjects. No personal or sensitive information is present.
## Bias, Risks, and Limitations
### Known Limitations
1. **Mesh Intersections:** Actor meshes may intersect with each other and environment geometry throughout the dataset.
2. **Neutral Facial Expressions:** All actors maintain neutral resting faces—no facial expressions are present.
3. **No Social Interactions:** Actors are animated independently with no coordinated interactions (similar to BEDLAM).
4. **No Ground Contact Labels:** Foot-ground contact is not annotated; feet may penetrate floors or float above ground.
5. **Limited Camera Movement Variety:** Only four camera movement types are present (stationary, dolly, arc, orbit), with all movements maintaining constant velocity and rotation rate throughout each animation.
### Recommendations
Users should carefully consider the known limitations when selecting CURDIE3 for their research. The dataset is best suited for pose estimation tasks that do not require physical plausibility, facial expressions, or social interaction understanding.
## Citation
**BibTeX:**
```bibtex
@inproceedings{wilson2026curdie3,
title={{CURDIE}3: Crafted Unreal Renders of Detailed Individuals in 3 Environments},
author={Brendan Wilson and Astrid Wilde and Chris Johnson and Michael Listo and Audrey Ito and Anna Shive and Sherry Wallace},
year={2026}
}
```
## Technical Requirements
### Dependencies
```
numpy
Pillow
pycocotools
```
### Example Usage
```python
import numpy as np
from pycocotools import mask as mask_utils
from PIL import Image
data = np.load("000000000042.npz", allow_pickle=True)
rgb = np.array(Image.open("000000000042.png"))
# Access depth
depth_m = data['depth_m'].astype(np.float32)
# Access camera intrinsics
K = data['K']
fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]
# Check camera movement type
capture_mode = str(data['capture_mode']) # 'free' or 'orbit'
# Iterate over actors
h, w = data['image_shape'][:2]
for i in range(int(data['num_actors'])):
    # Decode instance mask
    rle = {'size': [int(h), int(w)], 'counts': data['actor_mask_rles'][i]}
    mask = mask_utils.decode(rle)
    # Get 2D body keypoints
    kpts_2d = data['keypoints_body_2d'][i]  # (17, 3)
```
## Dataset Card Authors
Brendan Wilson, Astrid Wilde, Chris Johnson, Michael Listo, Audrey Ito, Anna Shive, Sherry Wallace
### Contact Us
If you're interested in similar datasets with greater diversity (more actors, more environments, realistic camera motions, human-human interaction, human-object interaction), richer labels, or humans substituted with robots,
[Contact Us](mailto:astrid@kikitora.com)