lzzzzy committed on
Commit 07a7354 · 1 Parent(s): 633e691

Add HF dataset viewer metadata
.gitignore ADDED
@@ -0,0 +1 @@
+ .DS_Store
README.md CHANGED
@@ -1,16 +1,22 @@
- ---
- license: mit
- ---

  # AVGen-Bench Generated Videos Data Card

  ## Overview

- This data card describes the generated audio-video outputs organized under `generated_videos/` in AVGen-Bench.

  The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.

- Code repository: https://github.com/microsoft/AVGen-Bench

  ## What This Dataset Contains
@@ -23,28 +29,28 @@ The dataset is organized by:

  A typical top-level structure is:

  ```text
- AVGen-Bench/
- ├── generated_videos/
- │   ├── Kling_2.6/
- │   ├── LTX-2/
- │   ├── LTX-2.3/
- │   ├── MOVA_360p_Emu3.5/
- │   ├── MOVA_360p_NanoBanana_2/
- │   ├── Ovi_11/
- │   ├── Seedance_1.5_pro/
- │   ├── Sora_2/
- │   ├── Veo_3.1_fast/
- │   ├── Veo_3.1_quality/
- │   ├── Wan_2.2_HunyuanVideo-Foley/
- │   └── Wan_2.6/
- ├── prompts/
- └── generated_images/   # optional, depending on generation pipeline
  ```

  Within each model directory, videos are grouped by category, for example:

  ```text
- generated_videos/veo3.1_fast/
  ├── ads/
  ├── animals/
  ├── asmr/
@@ -90,7 +96,7 @@ Each generated item is typically:

  - A single `.mp4` file
  - Containing model-generated video and, when supported by the model/pipeline, synthesized audio
- - Stored under `generated_videos/<model>/<category>/`

  The filename is usually derived from prompt content after sanitization. Exact naming may vary by generation script or provider wrapper.
  In the standard export pipeline, the filename is derived from the prompt's `content` field using the following logic:
@@ -109,9 +115,18 @@ def safe_filename(name: str, max_len: int = 180) -> str:

  So the expected output path pattern is:

- ```text
- generated_videos/<model>/<category>/<safe_filename(content)>.mp4
- ```

  ## How The Data Was Produced
+ ---
+ license: mit
+ configs:
+ - config_name: default
+   data_dir: .
+   default: true
+ ---

  # AVGen-Bench Generated Videos Data Card

  ## Overview

+ This data card describes the generated audio-video outputs stored directly in the repository root, organized by model directory.

  The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.

+ Code repository: https://github.com/microsoft/AVGen-Bench
+
+ For Hugging Face Hub compatibility, the repository includes a root-level `metadata.parquet` file so the Dataset Viewer can expose each video as a structured row with prompt metadata instead of treating the repo as an unindexed file dump.

  ## What This Dataset Contains

  A typical top-level structure is:

  ```text
+ AVGen-Bench/
+ ├── Kling_2.6/
+ ├── LTX-2/
+ ├── LTX-2.3/
+ ├── MOVA_360p_Emu3.5/
+ ├── MOVA_360p_NanoBanana_2/
+ ├── Ovi_11/
+ ├── Seedance_1.5_pro/
+ ├── Sora_2/
+ ├── Veo_3.1_fast/
+ ├── Veo_3.1_quality/
+ ├── Wan_2.2_HunyuanVideo-Foley/
+ ├── Wan_2.6/
+ ├── metadata.parquet
+ ├── prompts/
+ └── reference_image/   # optional, depending on generation pipeline
  ```

  Within each model directory, videos are grouped by category, for example:

  ```text
+ Veo_3.1_fast/
  ├── ads/
  ├── animals/
  ├── asmr/
 
  - A single `.mp4` file
  - Containing model-generated video and, when supported by the model/pipeline, synthesized audio
+ - Stored under `<model>/<category>/`

  The filename is usually derived from prompt content after sanitization. Exact naming may vary by generation script or provider wrapper.
  In the standard export pipeline, the filename is derived from the prompt's `content` field using the following logic:

  So the expected output path pattern is:

+ ```text
+ <model>/<category>/<safe_filename(content)>.mp4
+ ```
+
+ For Dataset Viewer indexing, `metadata.parquet` stores one row per exported video with:
+
+ - `file_name`: relative path to the `.mp4`
+ - `model`: model directory name
+ - `category`: benchmark category
+ - `content`: prompt short name
+ - `prompt`: full generation prompt
+ - `prompt_id`: index inside `prompts/<category>.json`

  ## How The Data Was Produced

Veo_3.1_fast/.DS_Store DELETED
Binary file (20.5 kB)
 
Veo_3.1_fast/ads/.DS_Store DELETED
Binary file (10.2 kB)
 
Veo_3.1_fast/sports/.DS_Store DELETED
Binary file (8.2 kB)
 
Veo_3.1_quality/.DS_Store DELETED
Binary file (12.3 kB)
 
Veo_3.1_quality/ads/.DS_Store DELETED
Binary file (10.2 kB)
 
Veo_3.1_quality/animals/.DS_Store DELETED
Binary file (8.2 kB)
 
Veo_3.1_quality/asmr/.DS_Store DELETED
Binary file (8.2 kB)
 
Veo_3.1_quality/cooking/.DS_Store DELETED
Binary file (10.2 kB)
 
Veo_3.1_quality/gameplays/.DS_Store DELETED
Binary file (8.2 kB)
 
Veo_3.1_quality/musical_instrument_tutorial/.DS_Store DELETED
Binary file (10.2 kB)
 
Veo_3.1_quality/news/.DS_Store DELETED
Binary file (8.2 kB)
 
metadata.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:236f73fb879f176c4999a60838f53d1dae7a8904b070344d60802f58ecb449c9
+ size 112572
prompts/.DS_Store DELETED
Binary file (6.15 kB)
 
scripts/build_hf_metadata.py ADDED
@@ -0,0 +1,123 @@
+ #!/usr/bin/env python3
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import re
+ import unicodedata
+ from collections import defaultdict
+ from pathlib import Path
+
+ import pyarrow as pa
+ import pyarrow.parquet as pq
+
+
+ INVALID_FILENAME_CHARS = r'[/\\:*?"<>|\n\r\t]'
+
+
+ def normalize_text(value: str) -> str:
+     return unicodedata.normalize("NFC", str(value).strip())
+
+
+ def safe_filename(name: str, max_len: int = 180) -> str:
+     name = normalize_text(name)
+     name = re.sub(INVALID_FILENAME_CHARS, "_", name)
+     name = re.sub(r"\s+", " ", name).strip()
+     if not name:
+         name = "untitled"
+     if len(name) > max_len:
+         name = name[:max_len].rstrip()
+     return name
+
+
+ def load_prompt_index(prompts_dir: Path) -> dict[str, dict[str, dict[str, object]]]:
+     prompt_index: dict[str, dict[str, dict[str, object]]] = defaultdict(dict)
+     duplicates: list[tuple[str, str]] = []
+
+     for prompt_file in sorted(prompts_dir.glob("*.json")):
+         category = prompt_file.stem
+         records = json.loads(prompt_file.read_text(encoding="utf-8"))
+         for prompt_id, record in enumerate(records):
+             content = normalize_text(record["content"])
+             key = safe_filename(content)
+             if key in prompt_index[category]:
+                 duplicates.append((category, key))
+             prompt_index[category][key] = {
+                 "content": content,
+                 "prompt": normalize_text(record["prompt"]),
+                 "prompt_id": prompt_id,
+             }
+
+     if duplicates:
+         duplicate_lines = "\n".join(f"- {category}: {key}" for category, key in duplicates[:20])
+         raise ValueError(f"Duplicate prompt filenames detected:\n{duplicate_lines}")
+
+     return prompt_index
+
+
+ def build_rows(repo_root: Path) -> list[dict[str, object]]:
+     prompt_index = load_prompt_index(repo_root / "prompts")
+     rows: list[dict[str, object]] = []
+
+     for video_path in sorted(repo_root.glob("*/*/*.mp4")):
+         model = video_path.parts[-3]
+         category = video_path.parts[-2]
+         filename = normalize_text(video_path.stem)
+         prompt_record = prompt_index.get(category, {}).get(filename)
+         rel_path = video_path.relative_to(repo_root).as_posix()
+
+         rows.append(
+             {
+                 "file_name": rel_path,
+                 "model": model,
+                 "category": category,
+                 "content": prompt_record["content"] if prompt_record else filename,
+                 "prompt": prompt_record["prompt"] if prompt_record else None,
+                 "prompt_id": prompt_record["prompt_id"] if prompt_record else None,
+                 "matched_prompt": prompt_record is not None,
+             }
+         )
+
+     return rows
+
+
+ def write_parquet(rows: list[dict[str, object]], output_path: Path) -> None:
+     table = pa.Table.from_pylist(rows)
+     pq.write_table(table, output_path)
+
+
+ def main() -> None:
+     parser = argparse.ArgumentParser(
+         description="Build metadata.parquet for the Hugging Face Dataset Viewer."
+     )
+     parser.add_argument(
+         "--repo-root",
+         type=Path,
+         default=Path(__file__).resolve().parents[1],
+         help="Path to the dataset repository root.",
+     )
+     parser.add_argument(
+         "--output",
+         type=Path,
+         default=None,
+         help="Output parquet path. Defaults to <repo-root>/metadata.parquet.",
+     )
+     args = parser.parse_args()
+
+     repo_root = args.repo_root.resolve()
+     output_path = args.output.resolve() if args.output else repo_root / "metadata.parquet"
+
+     rows = build_rows(repo_root)
+     if not rows:
+         raise ValueError(f"No MP4 files found under {repo_root}")
+
+     write_parquet(rows, output_path)
+
+     matched = sum(1 for row in rows if row["matched_prompt"])
+     print(f"Wrote {len(rows)} rows to {output_path}")
+     print(f"Matched prompts: {matched}/{len(rows)}")
+
+
+ if __name__ == "__main__":
+     main()
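To make the filename contract concrete, the sanitization logic in the script can be exercised standalone. The sketch below duplicates `safe_filename` from `scripts/build_hf_metadata.py` purely for illustration; it is not an additional file in this commit.

```python
# Standalone sketch of the safe_filename logic from scripts/build_hf_metadata.py,
# shown only to illustrate how prompt content maps to an .mp4 basename.
import re
import unicodedata

INVALID_FILENAME_CHARS = r'[/\\:*?"<>|\n\r\t]'


def normalize_text(value: str) -> str:
    return unicodedata.normalize("NFC", str(value).strip())


def safe_filename(name: str, max_len: int = 180) -> str:
    name = normalize_text(name)
    name = re.sub(INVALID_FILENAME_CHARS, "_", name)  # replace path/shell-hostile chars
    name = re.sub(r"\s+", " ", name).strip()          # collapse whitespace runs
    if not name:
        name = "untitled"                             # never emit an empty basename
    if len(name) > max_len:
        name = name[:max_len].rstrip()                # cap length for filesystem limits
    return name


print(safe_filename('dog barking: "loud"'))  # → dog barking_ _loud_
print(safe_filename("   "))                  # → untitled
```

This is why prompt lookup in `load_prompt_index` keys on `safe_filename(content)`: the video's basename on disk is the sanitized prompt content, so applying the same transform to the prompt JSON recovers the join key.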