---
license: mit
configs:
  - config_name: default
    data_dir: .
    default: true
---

# AVGen-Bench Generated Videos Data Card

## Overview

This data card describes the generated audio-video outputs stored in this repository, organized at the root level by model directory.

The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.

Code repository: https://github.com/microsoft/AVGen-Bench

For Hugging Face Hub compatibility, the repository includes a root-level `metadata.parquet` file so the Dataset Viewer can expose each video as a structured row with prompt metadata instead of treating the repo as an unindexed file dump.

## What This Dataset Contains

The dataset is organized by:

1. Model directory
2. Video category
3. Generated `.mp4` files

A typical top-level structure is:

```text
AVGen-Bench/
├── Kling_2.6/
├── LTX-2/
├── LTX-2.3/
├── MOVA_360p_Emu3.5/
├── MOVA_360p_NanoBanana_2/
├── Ovi_11/
├── Seedance_1.5_pro/
├── Sora_2/
├── Veo_3.1_fast/
├── Veo_3.1_quality/
├── Wan_2.2_HunyuanVideo-Foley/
├── Wan_2.6/
├── metadata.parquet
├── prompts/
└── reference_image/   # optional, depending on generation pipeline
```
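Given this layout, the generated files can be enumerated with a short script. The sketch below is illustrative, assuming the repository has been cloned locally (the root path is a placeholder):

```python
from collections import Counter
from pathlib import Path


def count_videos(root: str) -> Counter:
    """Count .mp4 files per (model, category) pair under the repo root."""
    counts: Counter = Counter()
    for mp4 in Path(root).glob("*/*/*.mp4"):
        # Path layout: <root>/<model>/<category>/<file>.mp4
        counts[(mp4.parts[-3], mp4.parts[-2])] += 1
    return counts
```

This can be useful for spotting models with missing videos (see "Known Limitations" below) by comparing per-category counts against the prompt table.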

Within each model directory, videos are grouped by category, for example:

```text
Veo_3.1_fast/
├── ads/
├── animals/
├── asmr/
├── chemical_reaction/
├── cooking/
├── gameplays/
├── movie_trailer/
├── musical_instrument_tutorial/
├── news/
├── physical_experiment/
└── sports/
```

## Prompt Coverage

Prompt definitions are stored in `prompts/*.json`.

The current prompt set contains **235 prompts** across **11 categories**:

| Category | Prompt count |
|---|---:|
| `ads` | 20 |
| `animals` | 20 |
| `asmr` | 20 |
| `chemical_reaction` | 20 |
| `cooking` | 20 |
| `gameplays` | 20 |
| `movie_trailer` | 20 |
| `musical_instrument_tutorial` | 35 |
| `news` | 20 |
| `physical_experiment` | 20 |
| `sports` | 20 |

Prompt JSON entries typically contain:

- `content`: a short content descriptor used for naming or indexing
- `prompt`: the full generation prompt
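Under that assumption, one category's prompt file can be loaded with a small helper. This is a sketch, since the exact JSON schema may vary across categories; it assumes each file holds a list of objects:

```python
import json


def load_prompts(path: str) -> list:
    """Load one prompts/<category>.json file, keeping entries that
    carry the expected 'content' and 'prompt' keys."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    return [e for e in entries if "content" in e and "prompt" in e]
```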


## Data Instance Format

Each generated item is typically:

- A single `.mp4` file
- Containing model-generated video and, when supported by the model/pipeline, synthesized audio
- Stored under `<model>/<category>/`

Filenames are derived from the prompt's `content` field after sanitization; exact naming may vary by generation script or provider wrapper. In the standard export pipeline, sanitization uses the following logic:

```python
import re


def safe_filename(name: str, max_len: int = 180) -> str:
    name = str(name).strip()
    # Replace path separators, reserved characters, and control
    # whitespace with underscores.
    name = re.sub(r'[/\\:*?"<>|\n\r\t]', "_", name)
    # Collapse runs of whitespace to a single space.
    name = re.sub(r"\s+", " ", name).strip()
    if not name:
        name = "untitled"
    if len(name) > max_len:
        name = name[:max_len].rstrip()
    return name
```

So the expected output path pattern is:

```text
<model>/<category>/<safe_filename(content)>.mp4
```

For Dataset Viewer indexing, `metadata.parquet` stores one row per exported video with:

- `file_name`: relative path to the `.mp4`
- `model`: model directory name
- `category`: benchmark category
- `content`: prompt short name
- `prompt`: full generation prompt
- `prompt_id`: index inside `prompts/<category>.json`
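With that schema, the index can be queried directly, for example with pandas. This is a sketch, assuming pandas is installed and the repository is cloned locally:

```python
import pandas as pd


def videos_for(df: pd.DataFrame, model: str, category: str) -> list:
    """List relative .mp4 paths for one model/category pair
    from the metadata.parquet index."""
    mask = (df["model"] == model) & (df["category"] == category)
    return df.loc[mask, "file_name"].tolist()


# Typical usage against a local clone:
# df = pd.read_parquet("metadata.parquet")
# paths = videos_for(df, "Sora_2", "asmr")
```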

## How The Data Was Produced

The videos were generated by running different T2AV systems on a shared benchmark prompt set.

Important properties:

- All systems are evaluated against the same category structure
- Outputs are model-generated rather than human-recorded
- Different models may expose different generation settings, resolutions, or conditioning mechanisms
- Some pipelines may additionally use first-frame or reference-image inputs, depending on the underlying model

## Intended Uses

This dataset is intended for:

- Benchmarking T2AV generation systems
- Running AVGen-Bench evaluation scripts
- Comparing failure modes across models
- Qualitative demo curation
- Error analysis by category or prompt type

## Out-of-Scope Uses

This dataset is not intended for:

- Training a general-purpose video generation model
- Treating model outputs as factual evidence of real-world events
- Safety certification of a model without additional testing
- Any claim that benchmark performance fully captures downstream deployment quality

## Known Limitations

- Outputs are synthetic and inherit the biases and failure modes of the generating models
- Some categories emphasize benchmark stress-testing rather than natural real-world frequency
- File availability may vary across models if a generation job failed, timed out, or was filtered
- Different model providers enforce different safety and moderation policies; some prompts may be rejected during provider-side review, which can lead to missing videos for specific models even when the prompt exists in the benchmark


## Risks and Responsible Use

Because these are generated videos:

- Visual realism does not imply factual correctness
- Audio may contain artifacts, intelligibility failures, or misleading synchronization
- Generated content may reflect stereotypes, implausible causal structure, or unsafe outputs inherited from upstream models

Anyone redistributing results should clearly label them as synthetic model outputs.