# Soul-Bench-Eval User Guide
This project provides an extensible objective evaluation framework for video generation models. Through a unified entry script `evaluate.py`, multiple evaluation subjects can be executed in series on the same batch of generation results, outputting a structured JSON report. The following sections detail the project structure, preparation, overall process, and usage points for each evaluation subject.
## Directory Structure Overview
```
Soul-Bench-Eval/
├── evaluate.py            # Main evaluation script; loads data and runs subjects sequentially
├── parallel_evaluate.py   # Script for parallel execution across multiple GPUs
├── merge_results.py       # Script for merging results from parallel execution
├── calculate_average.py   # Script for calculating average scores from JSON reports
├── utils.py               # Video/image I/O and common utility functions
├── video.py               # `VideoData` data structure definition
└── subjects/              # Concrete evaluation subject implementations
    ├── arcface_consistency.py
    ├── av_align.py
    ├── latent_sync.py
    ├── qwen_vl_vllm.py
    └── video_quality.py
```
Core Process:
1. `evaluate.py` constructs a list of `VideoData` based on the specified input/output directories.
2. Imports modules from the `subjects/` directory based on the `--evaluate_subjects` argument and calls their `evaluate` functions.
3. Each subject writes evaluation results to `VideoData` (`register_result`).
4. All results are finally saved as `evaluation_results.json`.
The `VideoData` object automatically associates reference images (`.png`), audio (`.wav`), and text (`.txt`) with the same name as the video, and provides convenient methods like `has_image/has_audio/has_text` for subjects to call.
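As a rough mental model, a simplified `VideoData` might look like the sketch below (see `video.py` for the real definition; the attribute names here are illustrative):

```python
from pathlib import Path

class VideoData:
    """Simplified sketch of the VideoData container (not the actual video.py code)."""

    def __init__(self, video_path: str, input_dir: str):
        self.video_path = Path(video_path)
        stem = self.video_path.stem
        # Companion files share the video's base name inside the input directory.
        self.image_path = Path(input_dir) / f"{stem}.png"
        self.audio_path = Path(input_dir) / f"{stem}.wav"
        self.text_path = Path(input_dir) / f"{stem}.txt"
        self.results = {}

    def has_image(self) -> bool:
        return self.image_path.exists()

    def has_audio(self) -> bool:
        return self.audio_path.exists()

    def has_text(self) -> bool:
        return self.text_path.exists()

    def register_result(self, subject_name: str, payload: dict) -> None:
        # Each subject stores its output under its own key.
        self.results[subject_name] = payload
```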
## Environment Preparation
- Python 3.10 or later (use a version compatible with the project's dependencies).
- Common dependencies (can be installed via `pip install -r requirements.txt`):
- `torch`, `torchvision`
- `transformers`, `tqdm`, `numpy`
- `decord`, `opencv-python`, `pillow`
- Each subject may require additional third-party libraries or model weights, see below for details.
## Quick Start
1. **Prepare Evaluation Data**
Your data should follow this structure:
```
data_example/
├── soul-input/                  # Reference materials
│   ├── 0016_human_talk_en_female.json
│   ├── 0016_human_talk_en_female.png
│   ├── 0016_human_talk_en_female.wav
│   └── ...
└── soul-results/                # Generated videos
    └── InfiniteTalk/            # Model name (optional subdirectory)
        ├── 0016_human_talk_en_female.mp4
        └── ...
```
- `--model_input_dir`: Folder containing reference materials, including `.png` (reference image), `.wav` (reference audio), `.txt` or `.json` (text prompt) with the same name as the video. Can be omitted if not needed.
- `--model_output_dir`: Video results generated by the model to be evaluated, extension must be `.mp4`.
2. **Prepare Environment**
Different evaluation subjects require different pre-trained models or third-party repositories:
**Identity Consistency (ArcFace)**
- **Reference**: [InsightFace](https://github.com/deepinsight/insightface)
- **Setup**:
```bash
pip install insightface
# Models will be auto-downloaded on first run to ~/.insightface/models/
```
**LSE-D/LSE-C (Latent Sync)**
- **Reference**: [LatentSync](https://github.com/bytedance/LatentSync)
- **Setup**:
```bash
# Clone repository
git clone https://github.com/bytedance/LatentSync.git third_party/LatentSync
cd third_party/LatentSync
# Install dependencies
source setup_env.sh
# Models will be auto-downloaded from HuggingFace (ByteDance/LatentSync-1.5)
```
**Audio-Video Alignment (AV-Align)**
- **Reference**: Built-in implementation (no external repo needed)
- **Setup**:
```bash
pip install opencv-python librosa numpy
```
**Video-Text Consistency (Qwen-VL)**
- **Reference**: [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL) / [vLLM](https://github.com/vllm-project/vllm)
- **Setup**:
```bash
pip install vllm
# Start vLLM server
python -m vllm.entrypoints.openai.api_server \
--model Qwen/Qwen3-VL-235B-A22B-Thinking \
--served-model-name qwen3-vl \
--trust-remote-code
# Then pass API endpoint in model_args when running evaluation
```
**Video Quality (FineVQ)**
- **Reference**: [FineVQ](https://github.com/IntMeGroup/FineVQ)
- **Setup**:
```bash
# Clone repository
git clone https://github.com/IntMeGroup/FineVQ.git third_party/FineVQ
cd third_party/FineVQ
# Install dependencies
pip install -r requirements.txt
pip install flash-attn==2.3.6 --no-build-isolation
# Download model weights as per FineVQ README (Inference section)
huggingface-cli download IntMeGroup/FineVQ_score --local-dir ./IntMeGroup/FineVQ_score
```
3. **Select Evaluation Subjects**
- Separate subjects with commas, e.g., `--evaluate_subjects arcface_consistency,av_align`.
- `--model_args` accepts one JSON string per subject (comma-separated, in the same order) to pass additional configuration.
4. **Run Example**
```bash
# Using the example data structure above
python evaluate.py \
--model_input_dir data_example/soul-input \
--model_output_dir data_example/soul-results/InfiniteTalk \
--results_dir ./evaluation_results \
--evaluate_subjects arcface_consistency,av_align \
--model_args '{}','{}'
# Or with custom paths
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--results_dir ./evaluation_results \
--evaluate_subjects arcface_consistency,av_align \
--model_args '{}','{}'
```
Common arguments:
- `--device`: Default `cuda`, some subjects support `cpu`.
- `--batch_size`: Batch size for processing frames, default `16`.
- `--sampling`: Number of frames to sample per video (default `16`); set `DEFAULT_ALL_FRAMES = True` in the code to use all frames by default.
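Uniform frame sampling of the kind `--sampling` controls can be sketched as evenly spaced indices across the video (an assumption about the implementation; the actual logic lives in `utils.py`):

```python
import numpy as np

def sample_frame_indices(num_frames: int, sampling: int) -> np.ndarray:
    """Evenly spaced frame indices -- a plausible sketch of what --sampling does."""
    if sampling <= 0 or sampling >= num_frames:
        return np.arange(num_frames)  # fall back to all frames
    return np.linspace(0, num_frames - 1, sampling).round().astype(int)
```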
5. **View Results**
- Results for all `VideoData` will be written to `results_dir/evaluation_results.json`.
- Each entry in the JSON contains the video path and the evaluation results for each subject.
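For orientation, one entry in the report might look like the hypothetical example below; the exact field names depend on which subjects you ran, so treat this as illustrative rather than the guaranteed schema:

```python
import json

# Hypothetical shape of one entry in evaluation_results.json.
entry = {
    "video_path": "data_example/soul-results/InfiniteTalk/0016_human_talk_en_female.mp4",
    "results": {
        "arcface_consistency": {"arcface_consistency": 0.71},
        "av_align": {"iou_score": 0.43, "num_audio_peaks": 12, "num_video_peaks": 15},
    },
}
report = json.dumps([entry], indent=2)  # the file stores a list of such entries
loaded = json.loads(report)
```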
## Advanced Usage
### Parallel Evaluation
For large datasets, you can use `parallel_evaluate.py` to distribute tasks across multiple GPUs.
```bash
# Use 8 GPUs, split into 8 groups
python parallel_evaluate.py \
--model_input_dir ./inputs \
--model_output_dir ./outputs \
--evaluate_subjects arcface_consistency \
--group_total 8 \
--num_gpus 8
# Use 4 GPUs, split into 8 groups (2 tasks per GPU)
python parallel_evaluate.py \
--model_input_dir ./inputs \
--model_output_dir ./outputs \
--evaluate_subjects arcface_consistency \
--group_total 8 \
--num_gpus 4 \
--parallelism 4
```
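The group-to-GPU arithmetic above (8 groups on 4 GPUs means 2 tasks per GPU) amounts to a round-robin assignment; this is an illustrative sketch, not the actual `parallel_evaluate.py` logic:

```python
def assign_groups_to_gpus(group_total: int, num_gpus: int) -> dict:
    """Round-robin mapping of task groups to GPU ids (illustrative only)."""
    return {group: group % num_gpus for group in range(group_total)}
```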
### Result Merging
If you ran evaluations in parallel (or manually split them), use `merge_results.py` to combine the JSON files.
```bash
# Automatically detect and merge results in a directory
python merge_results.py --results_dir ./evaluation_results --subjects arcface_consistency --group_total 8
# Manually specify files to merge
python merge_results.py --input_files result1.json result2.json --output merged.json
```
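Since each entry in the per-group JSON files is self-contained, merging is essentially list concatenation; a minimal sketch (the real `merge_results.py` may also deduplicate by video path):

```python
import json
from pathlib import Path

def merge_result_files(paths, output_path):
    """Concatenate per-group entry lists into one report (illustrative sketch)."""
    merged = []
    for p in paths:
        merged.extend(json.loads(Path(p).read_text()))
    Path(output_path).write_text(json.dumps(merged, indent=2))
    return merged
```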
### Result Analysis
Use `calculate_average.py` to compute average scores for all metrics in the results file.
```bash
# Calculate averages
python calculate_average.py evaluation_results/evaluation_results.json
# Show detailed stats (min, max, etc.)
python calculate_average.py evaluation_results/evaluation_results.json -v
```
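Conceptually, averaging means collecting every numeric metric across entries and dividing by its count. The sketch below assumes a `results` dict of `{subject: {metric: value}}` per entry, which may differ from the real report layout:

```python
from collections import defaultdict

def average_numeric_metrics(entries):
    """Average every numeric metric across entries (illustrative version of
    what calculate_average.py computes)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for entry in entries:
        for subject, metrics in entry.get("results", {}).items():
            for name, value in metrics.items():
                if isinstance(value, (int, float)):
                    key = f"{subject}.{name}"
                    sums[key] += value
                    counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```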
## Evaluation Subjects Details
### 1. Identity Consistency (ArcFace)
- **Function**: Uses the InsightFace ArcFace model to measure the similarity between faces in the video and the reference image face.
- **Dependencies**: `insightface`, `opencv-python`, `torch` (GPU support required for faster detection).
- **Key Parameters**:
- `name`: InsightFace model combination, default `buffalo_l`.
- `det_size`: Face detection input size, default `(640, 640)`.
- **Usage Requirements**:
- Must provide the corresponding reference face image (`.png`).
- If no face is detected, an exception will be thrown or the frame will be skipped.
- Results include `arcface_consistency` mean and frame-by-frame similarity list.
- **Usage Example**:
```bash
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects arcface_consistency \
--model_args '{"name": "buffalo_l", "det_size": [640, 640]}'
```
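ArcFace identity scores of this kind are typically the cosine similarity between the reference face embedding and each frame's face embedding; a minimal sketch (not the project's exact code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))
```

The per-video `arcface_consistency` mean would then be the average of this score over all sampled frames with a detected face.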
### 2. LSE-D/LSE-C (Latent Sync)
- **Function**: Integrates ByteDance's open-source LatentSync project to evaluate lip-sync and speech synchronization.
- **Dependencies**:
- System requires `git`, `ffmpeg` (LatentSync runtime requirements).
- Python packages: `huggingface_hub` (auto download weights), and LatentSync's own dependencies (install according to its README after cloning).
- **Key Parameters**:
- `repo_dir`: LatentSync code location, default `third_party/LatentSync`.
- `force_clone`: Force re-clone if `true`.
- `min_track`: Minimum number of frames for face tracking, default `50`.
- `syncnet_checkpoint`, `huggingface_repo_id`: Custom weight path or source.
- `subject_name`: Custom key name for writing results, default `latent_sync`.
- **Output**: Each video contains fields like `confidence`, `av_offset`, `num_crops`. Records `error` if failed.
- **Usage Example**:
```bash
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects latent_sync \
--model_args '{"min_track": 50}'
```
### 3. Audio-Video Alignment (AV-Align)
- **Function**: Uses the AV-Align metric to evaluate the alignment between audio and video modalities. By detecting audio peaks and video peaks (based on optical flow), it calculates their Intersection over Union (IoU) to quantify synchronization. Higher IoU scores indicate better alignment.
- **Dependencies**:
- Python packages: `opencv-python` (`cv2`), `librosa`, `numpy`.
- System tools: `ffmpeg` (used to extract audio from video if not provided separately).
- **Key Parameters** (passed via `--model_args` JSON):
- `subject_name`: Custom key name for writing results, default `av_align`.
- `downsample`: Frame downsampling factor, default `2`. Used to accelerate calculation:
- `1`: No downsampling (slowest, most accurate)
- `2`: Process every other frame (2x speed, recommended)
- `4`: Process every 4th frame (4x speed)
- **Input Requirements**:
- Video file (`.mp4`) required.
- If corresponding audio file (`.wav`) is provided, it will be used directly; otherwise, audio will be extracted from the video automatically.
- **Output Fields**:
- `iou_score`: Audio-video alignment IoU score (between 0 and 1; higher is better).
- `num_audio_peaks`: Number of detected audio peaks.
- `num_video_peaks`: Number of detected video peaks.
- `fps`: Effective frame rate of the video (considering downsampling).
- `downsample_factor`: Actual downsampling factor used.
- Records `error` field if evaluation fails.
- **Performance Optimizations**:
  - Vectorized calculation: uses NumPy arrays instead of Python lists.
  - Direct grayscale extraction: avoids repeated BGR-to-grayscale conversion.
  - Pre-allocated memory: reduces dynamic resizing overhead.
  - Frame downsampling: optional frame skipping for acceleration (default 2x).
  - Optimized peak detection: vectorized local-maxima search.
- **Technical Details**:
- Audio peak detection: Uses Onset Detection algorithm (librosa).
- Video peak detection: Uses Farneback optical flow to calculate inter-frame motion, local maxima are video peaks (filtering out static scenes with magnitude < 0.1).
- IoU calculation: Each video peak matches at most one audio peak, matching window is Β±1 frame.
- **Usage Example**:
```bash
# Default settings (2x speed)
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects av_align \
--model_args '{}'
# Highest precision (no downsampling, slower)
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects av_align \
--model_args '{"downsample": 1}'
# Fast mode (4x speed)
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects av_align \
--model_args '{"downsample": 4}'
```
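The IoU matching described under Technical Details, where each video peak matches at most one audio peak within a ±1-frame window, can be sketched as follows (an illustrative implementation, not the subject's actual code):

```python
def peak_iou(audio_peaks, video_peaks, window: int = 1) -> float:
    """IoU between audio and video peak sets: each video peak may match at most
    one audio peak within +/- `window` frames; each audio peak is used once."""
    remaining = list(audio_peaks)
    matches = 0
    for v in video_peaks:
        for i, a in enumerate(remaining):
            if abs(v - a) <= window:
                matches += 1
                remaining.pop(i)  # consume the matched audio peak
                break
    union = len(audio_peaks) + len(video_peaks) - matches
    return matches / union if union else 0.0
```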
### 4. Video-Text Consistency (Qwen-VL)
- **Function**: Calls multimodal large models (default Qwen3-VL) via vLLM to read videos and answer custom prompt questions, recording model output.
- **Dependencies**: `vllm` and its hardware dependencies (CUDA environment, corresponding VRAM requirements).
- **Key Parameters**:
- `model_name`: Default `qwen3-vl`.
- `prompt_template` or `prompt_template_path`: Template for generating text prompts, supports `str.format` placeholders.
- `template_variables`: Optional default variables for the template.
- **Input Requirements**: Video paths must be visible to vLLM (script defaults to allowing local file paths `file://`).
- **Usage Example**:
```bash
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects qwen_vl_vllm \
--model_args '{"model_name": "qwen3-vl", "prompt_template": "./video_eval.txt"}'
```
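Since `prompt_template` supports `str.format` placeholders, rendering a prompt presumably combines the per-video text with any `template_variables` defaults. The placeholder names below (`caption`, `max_score`) are hypothetical:

```python
# Hypothetical template; the real placeholder names depend on your template file.
prompt_template = (
    "Watch the video and rate how well it matches this description "
    "on a scale of 1-{max_score}: {caption}"
)

def build_prompt(template: str, caption: str, **defaults) -> str:
    """Fill a str.format-style template (a sketch of how the subject likely
    renders prompt_template with template_variables)."""
    return template.format(caption=caption, **defaults)
```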
### 5. Video Quality (FineVQ)
- **Function**: Automatically calls the FineVQ project to score video visual quality.
- **Dependencies**:
- System requires `git`, `ffmpeg` (FineVQ dependency environment).
- Python packages: Dependencies in the FineVQ repository will be installed on demand at runtime.
- **Key Parameters**:
- `repo_dir`: FineVQ repository storage path, default `third_party/FineVQ`.
- `repo_url`: Repository address, default official repository.
- `install_dependencies`: Whether to install dependencies on first run, default `true`.
- `force_clone`: Force re-clone repository.
- `per_device_batch_size`, `batch_size`, `nproc_per_node`: Control batch size and process count for distributed inference.
- `use_bf16`: Whether to enable bfloat16, default `true`.
- `env`: Additional environment variables (e.g., `CUDA_VISIBLE_DEVICES`).
- `extra_args`: Append or override FineVQ CLI arguments as a list.
- **Execution Flow**:
1. Symlink target videos to a temporary directory and generate `meta.json` required by FineVQ.
2. Call FineVQ official `torch.distributed.run` inference script.
3. Parse output CSV and metric files, and write to `video_quality` result field.
- **Output Fields**: `video_quality_score` (usually corresponds to `pred_score`), detailed raw metrics, and optional overall metrics `aggregate_metrics`.
- **Usage Example**:
```bash
python evaluate.py \
--model_input_dir /path/to/inputs \
--model_output_dir /path/to/outputs \
--evaluate_subjects video_quality \
--model_args '{"use_bf16": true}'
```
## Custom Extension
- To add a new evaluation subject, create a Python file with the same name in `subjects/` and export the `evaluate(data_list, device, batch_size, sampling, model_args)` function.
- Custom results for each `VideoData` are written via `register_result(subject_name, payload)` and will eventually appear in the summary JSON.
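A minimal custom subject following those two rules might look like this hypothetical `subjects/mean_brightness.py` (the metric and the frame-access details are made up for illustration; real subjects would decode frames via the helpers in `utils.py`):

```python
# subjects/mean_brightness.py -- a minimal, hypothetical example subject.
import numpy as np

def evaluate(data_list, device="cpu", batch_size=16, sampling=16, model_args=None):
    """Compute a toy per-video metric and register it on each VideoData."""
    for data in data_list:
        # A real subject would decode sampled frames here; this sketch just
        # falls back to a zero array when no frames are attached.
        frames = getattr(data, "frames", np.zeros((1, 8, 8, 3)))
        score = float(np.asarray(frames, dtype=np.float64).mean())
        data.register_result("mean_brightness", {"score": score})
    return data_list
```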
## FAQ
- **High Memory Usage**: Lower `--sampling` to reduce the number of frames sampled per video, or make sure `DEFAULT_ALL_FRAMES = False`.
- **Missing Dependencies or Models**: Install required Python packages or download weights according to error messages, manually preparing the `checkpoints/` directory if necessary.
- **Evaluation Order**: `evaluate.py` executes in the order specified in `--evaluate_subjects`. Changes to `data_list` by one subject are passed to the next.
We hope this guide helps you quickly understand and use the Soul-Bench-Eval framework. For further customization or troubleshooting, refer to the source code of the corresponding subject and adjust it to your project's needs.