---

license: cc-by-4.0
---


# Neural 3D Video Dataset - Processed

This directory contains preprocessed multi-view video data from the Neural 3D Video dataset, converted into a format suitable for 4D reconstruction and novel view synthesis tasks.

## Dataset Overview

**Source Dataset**: Neural 3D Video  
**License**: CC-BY-4.0  
**Processed Scenes**: 5 dynamic cooking scenes captured from multiple camera angles

### Scenes

| Scene | Description | Cameras | Frames |
|-------|-------------|---------|--------|
| `coffee_martini` | Making a coffee martini cocktail | 18 | 32 |
| `cook_spinach` | Cooking spinach in a pan | 18 | 32 |
| `cut_roasted_beef` | Cutting roasted beef | 18 | 32 |
| `flame_salmon_1` | Flambé salmon preparation | 18 | 32 |
| `sear_steak` | Searing steak in a pan | 18 | 32 |

## Directory Structure

```
Neural-3D-Video-Dataset/
├── README.md (this file)
├── coffee_martini_processed/
│   ├── 256/                          # 256×256 resolution
│   │   ├── images/                   # 32 frame images
│   │   │   ├── sample_000_cam00.jpg
│   │   │   ├── sample_001_cam01.jpg
│   │   │   └── ...
│   │   ├── transforms.json           # Camera poses (JSON format)
│   │   ├── transforms.npz            # Camera poses (NumPy format)
│   │   └── camera_visualization.html # Interactive 3D camera viewer
│   └── 512/                          # 512×512 resolution
│       ├── images/
│       ├── transforms.json
│       ├── transforms.npz
│       └── camera_visualization.html
├── cook_spinach_processed/
│   ├── 256/ ...
│   └── 512/ ...
├── cut_roasted_beef_processed/
│   ├── 256/ ...
│   └── 512/ ...
├── flame_salmon_1_processed/
│   ├── 256/ ...
│   └── 512/ ...
└── sear_steak_processed/
    ├── 256/ ...
    └── 512/ ...
```

## Data Format

### Camera Poses (`transforms.json`)

The camera poses are stored in a JSON file with the following structure:

```json
{
    "frames": [
        {
            "front": {
                "timestamp": 0,
                "file_path": "./images/sample_000_cam00.jpg",
                "w": 256,
                "h": 256,
                "fx": 341.33,
                "fy": 341.33,
                "cx": 128.0,
                "cy": 128.0,
                "w2c": [[...], [...], [...], [...]],     // 4×4 world-to-camera matrix
                "c2w": [[...], [...], [...]],            // 3×4 camera-to-world matrix
                "blender_camera_location": [x, y, z]     // Camera position in world coordinates
            }
        },
        ...
    ]
}
```

**Intrinsics** (camera internal parameters):
- **256×256**: `fx = fy = 341.33`, `cx = cy = 128.0`
- **512×512**: `fx = fy = 682.67`, `cx = cy = 256.0`

**Extrinsics** (camera external parameters):
- `w2c`: 4×4 world-to-camera transformation matrix
- `c2w`: 3×4 camera-to-world transformation matrix (rotation + translation)
- `blender_camera_location`: 3D camera position `[x, y, z]` in world coordinates
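A quick consistency check on these numbers: both resolutions come from the same crop, so the 512×512 intrinsics are exactly double the 256×256 ones (up to rounding in the stored values). A minimal sketch assembling the 3×3 K matrices; the helper name is our own, not part of the dataset:

```python
import numpy as np

def intrinsics_matrix(fx, fy, cx, cy):
    """Assemble a 3x3 pinhole intrinsics matrix K."""
    return np.array([[fx,  0.0, cx],
                     [0.0, fy,  cy],
                     [0.0, 0.0, 1.0]])

K_256 = intrinsics_matrix(341.33, 341.33, 128.0, 128.0)
K_512 = intrinsics_matrix(682.67, 682.67, 256.0, 256.0)

# Same lens and crop, double the pixels: fx, fy, cx, cy all double
# (atol absorbs the rounding of the stored focal lengths).
assert np.allclose(K_512[:2, :], 2.0 * K_256[:2, :], atol=0.02)
```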

### NumPy Format (`transforms.npz`)

For convenience, camera parameters are also provided in NumPy format:

```python
import numpy as np

data = np.load('transforms.npz')
intrinsics = data['intrinsics']              # (32, 3, 3) - intrinsic matrices
extrinsics_w2c = data['extrinsics_w2c']      # (32, 4, 4) - world-to-camera
extrinsics_c2w = data['extrinsics_c2w']      # (32, 4, 4) - camera-to-world
camera_positions = data['camera_positions']  # (32, 3) - camera locations
```
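Since `w2c` and `c2w` are inverses of each other in homogeneous form, either can be recovered from the other. A self-contained sketch with a made-up pose (the rotation and translation below are illustrative, not taken from the dataset):

```python
import numpy as np

# Illustrative pose: 30 degree rotation about Z, plus a translation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, -1.0, 2.0])

w2c = np.eye(4)
w2c[:3, :3] = R
w2c[:3, 3] = t

# camera-to-world is simply the inverse of world-to-camera
c2w = np.linalg.inv(w2c)

# The camera position in world coordinates is the translation column of
# c2w, which for a rigid transform equals -R^T t.
cam_pos = c2w[:3, 3]
assert np.allclose(cam_pos, -R.T @ t)
```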

### Frame Images

- **Format**: JPEG
- **Resolutions**: 256×256 and 512×512
- **Count**: 32 frames per scene
- **Naming**: `sample_{frame:03d}_cam{camera:02d}.jpg`

Each frame is extracted from a different camera view:
- Frame 0 → cam00
- Frame 1 → cam01
- ...
- Frame 17 → cam20
- Frame 18 → cam00 (loops back)
- ...
- Frame 31 → cam14
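Because the camera IDs in the filenames follow the original capture's numbering (which, as the cam20 entry above suggests, may not be contiguous), the safest way to recover the frame → camera mapping is to parse it from the filenames rather than compute it. A small sketch; the regex and function name are our own:

```python
import re

# Matches e.g. 'sample_000_cam00.jpg' and captures (frame, camera).
NAME_RE = re.compile(r'sample_(\d{3})_cam(\d{2})\.jpg')

def parse_sample_name(filename):
    """Extract (frame_index, camera_index) from a frame filename."""
    m = NAME_RE.fullmatch(filename)
    if m is None:
        raise ValueError(f'unexpected filename: {filename}')
    return int(m.group(1)), int(m.group(2))

assert parse_sample_name('sample_000_cam00.jpg') == (0, 0)
assert parse_sample_name('sample_017_cam20.jpg') == (17, 20)
```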

## Data Processing

### Original Data

- **Source resolution**: 2704×2028 (4:3 aspect ratio)
- **Original format**: Multi-view MP4 videos
- **Camera model**: LLFF format with `poses_bounds.npy`

### Processing Pipeline

1. **Center Crop**: 2704×2028 → 2028×2028 (square)
2. **Resize**: 2028×2028 → 256×256 or 512×512
3. **Intrinsics Adjustment**: Focal length and principal point adjusted for crop and resize
4. **Extrinsics Extraction**: Camera poses extracted from LLFF format
5. **Format Conversion**: Converted to standard c2w/w2c matrices
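The geometric effect of steps 1–3 on the intrinsics can be written down directly: the center crop shifts the principal point horizontally, and the resize scales focal length and principal point by `256/2028` (or `512/2028`). A minimal sketch assuming a symmetric crop and uniform scaling; the input focal length and principal point below are illustrative, not the dataset's actual calibration:

```python
# Original and target geometry from the processing pipeline above.
src_w, src_h = 2704, 2028
crop = 2028                 # square center crop (height is kept whole)
dst = 256                   # target resolution

x_off = (src_w - crop) / 2  # 338 px trimmed from each horizontal side
scale = dst / crop

def adjust_intrinsics(fx, fy, cx, cy):
    """Map intrinsics from the original frame to the cropped, resized one."""
    return fx * scale, fy * scale, (cx - x_off) * scale, cy * scale

# With an illustrative principal point at the original image center, the
# adjusted principal point lands at the center of the 256x256 image.
fx2, fy2, cx2, cy2 = adjust_intrinsics(2700.0, 2700.0, src_w / 2, src_h / 2)
assert abs(cx2 - 128.0) < 1e-9 and abs(cy2 - 128.0) < 1e-9
```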

### Frame Sampling Strategy

To capture the dynamic motion from multiple viewpoints, frames are sampled such that each frame shows the scene from a different camera angle in sequence. This creates a "synchronized" multi-view video where:
- The temporal progression shows the dynamic action
- Each frame provides a different spatial viewpoint
- Camera angles loop after exhausting all 18 cameras

## Camera Visualization

Each processed scene includes an interactive 3D camera visualization (`camera_visualization.html`):

- **View camera positions** and orientations in 3D space
- **Interactive**: Rotate, pan, and zoom to explore the camera rig
- **Camera frustums**: Visualize the viewing direction and field of view
- **Trajectory path**: See the sequence of frames and camera transitions
- **Powered by Plotly**: High-quality interactive graphics

Open the HTML file in any web browser to explore the camera setup.

## Usage Examples

### Loading Camera Poses (Python)

```python
import json
import numpy as np

# Load from JSON
with open('coffee_martini_processed/256/transforms.json', 'r') as f:
    data = json.load(f)

# Access first frame
frame0 = data['frames'][0]['front']
print(f"Camera intrinsics: fx={frame0['fx']}, fy={frame0['fy']}")
print(f"Camera position: {frame0['blender_camera_location']}")
print(f"Image path: {frame0['file_path']}")

# Load from NumPy
poses = np.load('coffee_martini_processed/256/transforms.npz')
intrinsics = poses['intrinsics']  # (32, 3, 3)
c2w = poses['extrinsics_c2w']     # (32, 4, 4)
```

### Loading Images

```python
import glob
import os

import cv2

scene_dir = 'coffee_martini_processed/256'
img_dir = os.path.join(scene_dir, 'images')

# Load all frames. The camera number embedded in each filename varies,
# so glob for it rather than hard-coding it.
frames = []
for i in range(32):
    pattern = os.path.join(img_dir, f'sample_{i:03d}_cam*.jpg')
    img_file = glob.glob(pattern)[0]
    img = cv2.imread(img_file)  # note: cv2.imread returns BGR channel order
    frames.append(img)

print(f"Loaded {len(frames)} frames, shape: {frames[0].shape}")
```

### PyTorch Dataset Example

```python
import json
import os

import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class Neural3DVideoDataset(Dataset):
    def __init__(self, scene_dir):
        self.scene_dir = scene_dir

        # Load transforms
        with open(os.path.join(scene_dir, 'transforms.json'), 'r') as f:
            self.data = json.load(f)

        self.frames = self.data['frames']

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        frame_data = self.frames[idx]['front']

        # Load image
        img_path = os.path.join(self.scene_dir, frame_data['file_path'])
        img = Image.open(img_path).convert('RGB')
        img = torch.from_numpy(np.array(img)).float() / 255.0

        # Get camera parameters
        intrinsics = torch.tensor([
            [frame_data['fx'], 0, frame_data['cx']],
            [0, frame_data['fy'], frame_data['cy']],
            [0, 0, 1]
        ], dtype=torch.float32)

        c2w = torch.tensor(frame_data['c2w'], dtype=torch.float32)

        return {
            'image': img,
            'intrinsics': intrinsics,
            'c2w': c2w,
            'timestamp': frame_data['timestamp']
        }

# Usage
dataset = Neural3DVideoDataset('coffee_martini_processed/256')
sample = dataset[0]
print(f"Image shape: {sample['image'].shape}")
print(f"Camera position: {sample['c2w'][:, 3]}")
```

## Technical Details

### Camera Configuration

- **Number of cameras**: 18 per scene
- **Camera arrangement**: Surrounding the scene in a roughly circular pattern
- **Frame rate**: 30 FPS (original videos)
- **Camera model**: Pinhole camera with radial distortion (pre-undistorted)

### Coordinate System

- **World coordinates**: Right-handed coordinate system
- **Camera coordinates**: 
  - X-axis: Right
  - Y-axis: Down
  - Z-axis: Forward (viewing direction)
- **c2w matrix**: Transforms from camera space to world space
- **w2c matrix**: Transforms from world space to camera space

### Quality Settings

- **JPEG quality**: 95
- **Interpolation**: Bilinear (cv2.INTER_LINEAR)
- **Color space**: RGB (8-bit per channel)

## Citation

If you use this dataset in your research, please cite the original Neural 3D Video dataset:

```bibtex
@article{neural3dvideo2021,
  title={Neural 3D Video Synthesis},
  author={Author Names},
  journal={Conference/Journal Name},
  year={2021}
}
```

## Processing Scripts

The data was processed using custom scripts available in the parent directory:

- `create_sync_video_with_poses.py` - Single scene processing
- `batch_process_scenes.py` - Batch processing for all scenes

## License

This processed dataset inherits the CC-BY-4.0 license from the original Neural 3D Video dataset. Please respect the license terms when using this data.

## Contact

For questions or issues regarding this processed dataset, please contact the dataset maintainer.