---
configs:
- config_name: default
  data_files:
  - split: train
    path: data.jsonl
task_categories:
- visual-question-answering
- video-classification
language:
- en
size_categories:
- n<1K
---

# CameraBench Binary Evaluation Dataset

A balanced binary visual question answering (VQA) dataset for evaluating camera motion understanding in videos.

## Dataset Statistics

- **Total Questions**: 384
- **Unique Videos**: 119
- **Unique Questions**: 31
- **Yes Answers**: 192 (50.0%)
- **No Answers**: 192 (50.0%)
- **Balance Ratio**: 1.00
- **Total Size**: 126.16 MB (0.12 GB)
- **Average Video Size**: 1.06 MB

## Task Categories

This dataset covers various camera motion tasks, including:

- **Static**: 42 questions
- **Move In**: 29 questions
- **Pan Left**: 24 questions
- **Tilt Up**: 24 questions
- **Move Out**: 21 questions
- **Move Right**: 19 questions
- **Roll Counterclockwise**: 18 questions
- **Pan Right**: 17 questions
- **Zoom Out**: 16 questions
- **Move Left**: 16 questions
- **Has Pan Left**: 15 questions
- **Roll Clockwise**: 15 questions
- **Zoom In**: 14 questions
- **Tilt Down**: 14 questions
- **Is The Fixed Camera Shaking Or Not**: 13 questions
- **Has Forward Motion**: 13 questions
- **Has Pan Right**: 12 questions
- **Is Scene Static Or Not**: 11 questions
- **Move Up**: 11 questions
- **Move Down**: 11 questions
- **Is The Camera Stable Or Shaky**: 9 questions
- **Has Truck Left**: 8 questions
- **Has Backward Motion**: 7 questions
- **Has Truck Right**: 6 questions
- **Has Forward Vs Backward Ground**: 4 questions
- **Has Zoom Out Not Move Vs Has Move Not Zoom Out**: 2 questions
- **Is Camera Movement Slow Or Fast**: 2 questions
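The per-task breakdown above can be recomputed directly from the `task` field of the metadata file. A minimal sketch (the `count_tasks` helper name is illustrative, not part of the dataset):

```python
import json
from collections import Counter

def count_tasks(metadata_path):
    """Count questions per task category in a metadata JSONL file."""
    with open(metadata_path, "r") as f:
        records = [json.loads(line) for line in f]
    return Counter(r["task"] for r in records)

# Example: print the breakdown for the downloaded dataset
# for task, n in count_tasks("metadata.jsonl").most_common():
#     print(f"{task}: {n} questions")
```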
## Dataset Format

The dataset consists of:

- `videos/`: Directory containing all MP4 video files
- `metadata.jsonl`: JSONL file with question annotations

Each record in `metadata.jsonl` contains:

- `video_name`: Original video filename
- `video_path`: Relative path to the video file (e.g., `videos/video.mp4`)
- `question`: Binary question about camera motion
- `label`: Answer (`"Yes"` or `"No"`)
- `task`: Task category
- `label_name`: Detailed label identifier
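
Based on the fields above, a record looks like the following (all values here are illustrative, not an actual entry from the dataset):

```json
{
  "video_name": "video_001.mp4",
  "video_path": "videos/video_001.mp4",
  "question": "Does the camera pan left in this video?",
  "label": "Yes",
  "task": "pan_left",
  "label_name": "pan_left_yes"
}
```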

## Usage

### Loading the Dataset

```python
import json

# Load metadata
metadata = []
with open("metadata.jsonl", "r") as f:
    for line in f:
        metadata.append(json.loads(line))

# Access a sample
sample = metadata[0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['label']}")
print(f"Task: {sample['task']}")
print(f"Video path: {sample['video_path']}")
```

### Downloading the Dataset

Download the entire dataset using `huggingface-cli` or `git`:

```bash
# Using huggingface-cli
huggingface-cli download tuhink/cambench_binary_eval --repo-type dataset --local-dir ./cambench_data

# Or using git
git clone https://huggingface.co/datasets/tuhink/cambench_binary_eval
```

This will download all videos and metadata to your local machine.

### Loading Videos

```python
import json

import cv2  # pip install opencv-python

# Load metadata
with open("metadata.jsonl", "r") as f:
    metadata = [json.loads(line) for line in f]

# Load a video
sample = metadata[0]
video_path = sample['video_path']  # e.g., "videos/video_name.mp4"

# Use OpenCV to read the video frame by frame
cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Process frame here
cap.release()
```

### Batch Processing

For evaluation, iterate over all questions and compare model predictions against the ground-truth labels:

```python
import json

# Load all questions
with open("metadata.jsonl", "r") as f:
    dataset = [json.loads(line) for line in f]


def your_model(video_path, question):
    # Placeholder: replace with your model's inference.
    # It should return "Yes" or "No".
    return "Yes"


correct = 0
total = 0

for sample in dataset:
    video_path = sample['video_path']
    question = sample['question']
    ground_truth = sample['label']

    prediction = your_model(video_path, question)

    if prediction == ground_truth:
        correct += 1
    total += 1

accuracy = correct / total if total > 0 else 0
print(f"Accuracy: {accuracy:.2%}")
```

### Using the Hugging Face `datasets` Library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("tuhink/cambench_binary_eval")

# Access samples
for sample in dataset['train']:
    print(f"Question: {sample['question']}")
    print(f"Answer: {sample['label']}")
    print(f"Video: {sample['video_path']}")
```

## Evaluation

This dataset is designed for binary classification tasks. Evaluate your model using:

- Accuracy
- Precision / Recall
- F1 score
- Per-task performance
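
As a sketch, the metrics above can be computed from parallel lists of ground-truth labels and predictions, treating `"Yes"` as the positive class (the `binary_metrics` helper name is illustrative):

```python
def binary_metrics(labels, predictions, positive="Yes"):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    pairs = list(zip(labels, predictions))
    tp = sum(1 for y, p in pairs if y == positive and p == positive)
    fp = sum(1 for y, p in pairs if y != positive and p == positive)
    fn = sum(1 for y, p in pairs if y == positive and p != positive)
    correct = sum(1 for y, p in pairs if y == p)

    accuracy = correct / len(pairs) if pairs else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

For per-task performance, apply the same helper to the subset of records sharing each `task` value.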

## License

Please refer to the original CameraBench dataset for licensing information.

## Citation

If you use this dataset, please cite the original CameraBench paper.

## Contact

For questions or issues, please open an issue on the repository.

---

**Note**: All videos are provided in their original MP4 format. The dataset maintains temporal dynamics for accurate camera motion evaluation.
|