Upload folder using huggingface_hub

- E2E_VP_default_test.json
- README.md
- convert.py

E2E_VP_default_test.json
ADDED
The diff for this file is too large to render.

README.md
ADDED
---
configs:
- config_name: default
  data_files:
  - split: test
    path: E2E_VP_default_test.json
license: mit
task_categories:
- video-classification
- question-answering
language:
- en
tags:
- video
- spatial-reasoning
- direction
- synthetic
- VideoLLM
pretty_name: E2E 20K Synthetic Direction
size_categories:
- 10K<n<100K
---

# E2E 20K Synthetic Direction

A large-scale synthetic video benchmark for evaluating the directional reasoning of VideoLLMs:
20,000 videos of **colored geometric shapes** moving in four cardinal directions, with automatically generated multiple-choice (MCQ) annotations.

## Directions

| Direction | Description |
|---|---|
| `up` | Object moves toward the top of the frame |
| `down` | Object moves toward the bottom of the frame |
| `left` | Object moves toward the left of the frame |
| `right` | Object moves toward the right of the frame |

## Usage

```python
from datasets import load_dataset

ds = load_dataset("YOUR_HF_ID/E2E_20K", split="test")
```

## Data Format

```json
{
  "id": 0,
  "video": "synthetic20k/train/down/circle_blue_001.mp4",
  "question": "Which direction does the blue circle move (camera perspective)?",
  "candidates": ["Left", "Down", "Right", "Up"],
  "answer": "B",
  "answer_text": "Down",
  "direction": "down"
}
```

| Field | Type | Description |
|---|---|---|
| `id` | int | Unique index |
| `video` | str | Relative path to the video |
| `question` | str | Direction question (randomly sampled from the question templates) |
| `candidates` | list[str] | Four answer options (order randomly shuffled) |
| `answer` | str | Label of the correct option (`A`/`B`/`C`/`D`) |
| `answer_text` | str | Text of the correct option (`Up` / `Down` / `Left` / `Right`) |
| `direction` | str | Ground-truth direction, lowercase |
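A minimal sketch of consuming one record in this format (the sample is the example above; the prompt formatting at the end is an assumption about how an evaluation harness might use it, not part of the dataset):

```python
# Sample record reproduced from the example above.
sample = {
    "id": 0,
    "video": "synthetic20k/train/down/circle_blue_001.mp4",
    "question": "Which direction does the blue circle move (camera perspective)?",
    "candidates": ["Left", "Down", "Right", "Up"],
    "answer": "B",
    "answer_text": "Down",
    "direction": "down",
}

# The letter label indexes into candidates: A -> 0, B -> 1, ...
idx = ord(sample["answer"]) - ord("A")
assert sample["candidates"][idx] == sample["answer_text"]
assert sample["answer_text"].lower() == sample["direction"]

# One way a harness might render the MCQ prompt (assumed format).
options = "\n".join(f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(sample["candidates"]))
prompt = f"{sample['question']}\n{options}"
```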
## Video Structure

```
synthetic20k/
├── train/
│   ├── up/
│   │   ├── circle_blue_001.mp4
│   │   ├── circle_red_001.mp4
│   │   └── ...
│   ├── down/
│   ├── left/
│   └── right/
└── val/
    ├── up/
    ├── down/
    ├── left/
    └── right/
```
## Shapes & Colors

**Shapes:** circle, square, triangle
**Colors:** blue, red, green, yellow, white

## Statistics

| Split | Total | Up | Down | Left | Right |
|---|---|---|---|---|---|
| train | 16,000 | 4,000 | 4,000 | 4,000 | 4,000 |
| val | 4,000 | 1,000 | 1,000 | 1,000 | 1,000 |
| **total** | **20,000** | **5,000** | **5,000** | **5,000** | **5,000** |

## Question Templates

Each sample draws from a pool of question templates to reduce template bias:

- *"Which direction does the {color} {shape} move (camera perspective)?"*
- *"As the clip progresses, which direction does the {color} {shape} move?"*
- *"Which direction does the {color} {shape} move within the frame?"*
- *"Which way does the {color} {shape} move in the image plane?"*
- *(and more...)*
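The sampling can be sketched as follows (a minimal illustration: the pool is abbreviated to the four templates shown above, and the actual generator may differ):

```python
import random

# Templates reproduced from the list above (abbreviated pool).
TEMPLATES = [
    "Which direction does the {color} {shape} move (camera perspective)?",
    "As the clip progresses, which direction does the {color} {shape} move?",
    "Which direction does the {color} {shape} move within the frame?",
    "Which way does the {color} {shape} move in the image plane?",
]

def make_question(color: str, shape: str, rng: random.Random) -> str:
    """Draw one template at random and fill in the object attributes."""
    return rng.choice(TEMPLATES).format(color=color, shape=shape)
```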
## Related Datasets

| Dataset | Description |
|---|---|
| [E2E_real_object](#) | Real-world objects, same direction task, smaller scale |
convert.py
ADDED
import argparse
import json
import random
import re


def parse_content(content: str):
    """Parse the question and answer choices from messages[0].content."""
    # Extract the question
    question_match = re.search(r'Question: (.+?)\n', content)
    question = question_match.group(1).strip() if question_match else ""

    # Extract the choices: pull Y out of lines like "- The X is moving Y."
    choices_raw = re.findall(r'- .+ is moving (\w+)\.', content)
    candidates = [c.capitalize() for c in choices_raw]

    return question, candidates


def convert(input_path: str, output_path: str, video_prefix: str, ratio: float = 1.0):
    with open(input_path, "r") as f:
        data = json.load(f)

    if not (0.0 < ratio <= 1.0):
        raise ValueError(f"--ratio must be greater than 0.0 and at most 1.0 (got: {ratio})")

    if ratio < 1.0:
        k = max(1, int(len(data) * ratio))
        data = random.sample(data, k)
        print(f"🎲 Sampled {len(data)} items (ratio={ratio})")

    results = []
    for idx, item in enumerate(data):
        content = item["messages"][0]["content"]
        assistant_text = item["messages"][1]["content"]  # e.g. "The blue circle is moving down."
        video_src = item["videos"][0]  # e.g. "synthetic20k/train/down/circle_blue_001.mp4"

        question, candidates = parse_content(content)

        # answer_text: extract the direction from the assistant reply
        dir_match = re.search(r'moving (\w+)\.', assistant_text)
        answer_text = dir_match.group(1).capitalize() if dir_match else ""
        direction = answer_text.lower()

        # answer: position within candidates -> A/B/C/D
        try:
            answer_idx = candidates.index(answer_text)
            answer = chr(ord("A") + answer_idx)
        except ValueError:
            answer = "?"

        # Video path: swap the prefix (drops the original split segment)
        # e.g. synthetic20k/train/down/circle_blue_001.mp4
        #   -> E2E_VP_default/val/down/circle_blue_001.mp4
        parts = video_src.split("/")  # ['synthetic20k', 'train', 'down', 'circle_blue_001.mp4']
        new_video = "/".join([video_prefix] + parts[2:])  # prefix + down/circle_blue_001.mp4

        results.append({
            "id": idx,
            "video": new_video,
            "question": question,
            "candidates": candidates,
            "answer": answer,
            "answer_text": answer_text,
            "direction": direction,
        })

    with open(output_path, "w") as f:
        json.dump(results, f, indent=2, ensure_ascii=False)

    print(f"✅ Converted {len(results)} items → {output_path}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="Path to the source JSON")
    parser.add_argument("--output", required=True, help="Path for the output JSON")
    parser.add_argument("--prefix", default="E2E_VP_default/val",
                        help="New video path prefix (default: E2E_VP_default/val)")
    parser.add_argument("--ratio", type=float, default=1.0,
                        help="Sampling ratio in (0.0, 1.0] (default: 1.0 = all)")
    args = parser.parse_args()

    convert(args.input, args.output, args.prefix, args.ratio)
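As a quick sanity check of the extraction logic, the same regexes can be run on a hand-written prompt in the expected format (the prompt text below is a hypothetical example, not an actual record):

```python
import re

# Hypothetical prompt in the format convert.py expects.
content = (
    "Question: Which direction does the blue circle move (camera perspective)?\n"
    "- The blue circle is moving left.\n"
    "- The blue circle is moving down.\n"
    "- The blue circle is moving right.\n"
    "- The blue circle is moving up.\n"
)

# Same extraction logic as parse_content() above.
question_match = re.search(r'Question: (.+?)\n', content)
question = question_match.group(1).strip() if question_match else ""
candidates = [c.capitalize() for c in re.findall(r'- .+ is moving (\w+)\.', content)]

# Map the ground-truth direction ("Down") to its A/B/C/D label.
answer = chr(ord("A") + candidates.index("Down"))  # "B"
```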