---
license: mit
configs:
  - config_name: default
    data_files:
    - split: test
      path: data/test-*.parquet

---
# CameraBench optical flow dataset

A balanced VQA dataset for evaluating camera motion understanding in videos.

## 📊 Dataset Statistics

- **Total Questions**: 249
- **Unique Videos**: 70
- **Unique Questions**: 13
- **Yes Answers**: 89 (35.7%)
- **No Answers**: 160 (64.3%)
- **Balance Ratio**: 0.56
- **Total Size**: 258.40 MB (0.25 GB)
- **Average Video Size**: 3.69 MB
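
The derived figures above follow directly from the raw counts. A quick sketch, using the numbers from the list and assuming the balance ratio is defined as the Yes count over the No count:

```python
yes, no = 89, 160             # answer counts from the statistics above
total_mb, videos = 258.40, 70 # total size and unique video count

balance_ratio = yes / no                 # rounds to 0.56
yes_pct = 100 * yes / (yes + no)         # rounds to 35.7
avg_video_mb = total_mb / videos         # rounds to 3.69

print(round(balance_ratio, 2), round(yes_pct, 1), round(avg_video_mb, 2))
```

The same definition reproduces the other splits' ratios (e.g. 2908/2908 = 1.0 for train).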

## 🎯 Task Categories

This dataset covers a range of camera motion understanding tasks, each posed as a binary (Yes/No) question about a single video.

## 📝 Dataset Format

Each example pairs an MP4 video with its extracted frames and optical flow visualizations. The test split is stored in Parquet and the train split in WebDataset tar shards (see the split sections below).

Each record contains:
- `video_name`: Original video filename
- `video_path`: Relative path to video file (e.g., `videos/video.mp4`)
- `frames`: Sequence of extracted video frames
- `optical_flows`: Sequence of optical flow visualizations
- `question`: Binary question about camera motion
- `label`: Answer ("Yes" or "No")
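
As a sanity check, the schema above can be expressed as a small validator. This is a sketch only; the sample record and its question text are made up for illustration:

```python
EXPECTED_KEYS = {
    "video_name", "video_path", "frames",
    "optical_flows", "question", "label",
}

def validate_record(rec):
    """Check a record against the schema described above."""
    missing = EXPECTED_KEYS - set(rec)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if rec["label"] not in ("Yes", "No"):
        raise ValueError(f"label must be 'Yes' or 'No', got {rec['label']!r}")
    return True

# Hypothetical record for illustration only
record = {
    "video_name": "video.mp4",
    "video_path": "videos/video.mp4",
    "frames": [],          # PIL Images in the real dataset
    "optical_flows": [],   # PIL Images in the real dataset
    "question": "Is the camera panning?",  # made-up example question
    "label": "No",
}
print(validate_record(record))  # True
```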

<!-- SPLIT-SECTION:train:START -->
## Split: train

### Statistics

- **Total Questions**: 5816
- **Unique Videos**: 207
- **Unique Questions**: 518
- **Yes Answers**: 2908 (50.0%)
- **No Answers**: 2908 (50.0%)
- **Balance Ratio**: 1.0
- **Total Size**: 5073.39 MB (4.95 GB)
- **Average Video Size**: 24.51 MB


### Format: WebDataset

This split uses WebDataset format for efficient streaming:
- **Tar Shards**: 16 tar files
- **Path**: `webdataset/train/train-*.tar`
- **Structure**: Each tar contains frames, optical flows, and metadata in WebDataset format
- **Usage**: Load with `webdataset` library for streaming access

```python
import webdataset as wds

# decode("rgb") converts the .png entries into RGB arrays
dataset = wds.WebDataset("path/to/train-*.tar").decode("rgb")
for sample in dataset:
    video_name = sample["video_name"]
    # metadata fields arrive as text/bytes, so cast before iterating
    num_frames = int(sample["num_frames"])
    num_flows = int(sample["num_flows"])
    frames = [sample[f"frame_{i:04d}.png"] for i in range(num_frames)]
    flows = [sample[f"flow_{i:04d}.png"] for i in range(num_flows)]
    # ...process sample...
```

<!-- SPLIT-SECTION:train:END -->





<!-- SPLIT-SECTION:test:START -->
## Split: test

### Statistics

- **Total Questions**: 282
- **Unique Videos**: 72
- **Unique Questions**: 12
- **Yes Answers**: 113 (40.1%)
- **No Answers**: 169 (59.9%)
- **Balance Ratio**: 0.67
- **Total Size**: 296.45 MB (0.29 GB)
- **Average Video Size**: 4.12 MB


### Format: Parquet

This split uses Parquet format with embedded images:
- **Path**: `data/test-*.parquet`
- **Structure**: Sharded parquet files with Image columns for frames and optical flows
- **Usage**: Load with `datasets` library for easy access in HuggingFace ecosystem

```python
from datasets import load_dataset

# replace "your-repo-id" with this dataset's Hub repository ID
dataset = load_dataset("your-repo-id", split="test")
for sample in dataset:
    frames = sample["frames"]  # list of PIL Images
    flows = sample["optical_flows"]  # list of PIL Images
    # ...process sample...
```

<!-- SPLIT-SECTION:test:END -->