Soul-Bench-Eval User Guide

This project provides an extensible framework for the objective evaluation of video generation models. Through a unified entry script, evaluate.py, multiple evaluation subjects can be run in sequence on the same batch of generation results, producing a structured JSON report. The following sections detail the project structure, preparation steps, overall process, and usage notes for each evaluation subject.

Directory Structure Overview

Soul-Bench-Eval/
├── evaluate.py           # Main evaluation script, loads data and runs subjects sequentially
├── parallel_evaluate.py  # Script for parallel execution across multiple GPUs
├── merge_results.py      # Script for merging results from parallel execution
├── calculate_average.py  # Script for calculating average scores from JSON reports
├── utils.py              # Video/image I/O and common utility functions
├── video.py              # `VideoData` data structure definition
└── subjects/             # Concrete evaluation subject implementations
    ├── arcface_consistency.py
    ├── av_align.py
    ├── latent_sync.py
    ├── qwen_vl_vllm.py
    └── video_quality.py

Core Process:

  1. evaluate.py constructs a list of VideoData based on the specified input/output directories.
  2. Imports modules from the subjects/ directory based on the --evaluate_subjects argument and calls their evaluate functions.
  3. Each subject writes evaluation results to VideoData (register_result).
  4. All results are finally saved as evaluation_results.json.

The VideoData object automatically associates reference images (.png), audio (.wav), and text (.txt) with the same name as the video, and provides convenient methods like has_image/has_audio/has_text for subjects to call.
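
As a mental model, the core flow can be pictured as the loop below. This is a simplified, illustrative sketch rather than the actual evaluate.py code; in particular, the to_dict() serialization call is an assumption made for the example.

import importlib
import json

def run(subjects, model_args_list, data_list, device="cuda", batch_size=16, sampling=16):
    # Import each subject module by name and let it score the shared VideoData list.
    for name, args in zip(subjects, model_args_list):
        module = importlib.import_module(f"subjects.{name}")  # e.g. subjects.av_align
        module.evaluate(data_list, device, batch_size, sampling, args)
    # Each subject has written its scores into the VideoData objects via register_result.
    with open("evaluation_results.json", "w") as f:
        json.dump([d.to_dict() for d in data_list], f, indent=2)  # to_dict() is assumed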

Environment Preparation

  • Python 3.10 or later (use a version compatible with the dependencies below).
  • Common dependencies (can be installed via pip install -r requirements.txt):
    • torch, torchvision
    • transformers, tqdm, numpy
    • decord, opencv-python, pillow
  • Each subject may require additional third-party libraries or model weights, see below for details.

Quick Start

  1. Prepare Evaluation Data

    Your data should follow this structure:

    data_example/
    ├── soul-input/              # Reference materials
    │   ├── 0016_human_talk_en_female.json
    │   ├── 0016_human_talk_en_female.png
    │   ├── 0016_human_talk_en_female.wav
    │   └── ...
    └── soul-results/            # Generated videos
        └── InfiniteTalk/        # Model name (optional subdirectory)
            ├── 0016_human_talk_en_female.mp4
            └── ...
    
    • --model_input_dir: Folder containing reference materials: a .png (reference image), .wav (reference audio), and .txt or .json (text prompt) sharing the video's base name. Can be omitted if not needed.
    • --model_output_dir: Videos generated by the model under evaluation; files must use the .mp4 extension.
  2. Prepare Environment

    Different evaluation subjects require different pre-trained models or third-party repositories:

    Identity Consistency (ArcFace)

    • Reference: InsightFace
    • Setup:
      pip install insightface
      # Models will be auto-downloaded on first run to ~/.insightface/models/
      

    LSE-D/LSE-C (Latent Sync)

    • Reference: LatentSync
    • Setup:
      # Clone repository
      git clone https://github.com/bytedance/LatentSync.git third_party/LatentSync
      cd third_party/LatentSync
      
      # Install dependencies
      source setup_env.sh
      
      # Models will be auto-downloaded from HuggingFace (ByteDance/LatentSync-1.5)
      

    Audio-Video Alignment (AV-Align)

    • Reference: Built-in implementation (no external repo needed)
    • Setup:
      pip install opencv-python librosa numpy
      

    Video-Text Consistency (Qwen-VL)

    • Reference: Qwen3-VL / vLLM
    • Setup:
      pip install vllm
      
      # Start vLLM server
      python -m vllm.entrypoints.openai.api_server \
        --model Qwen/Qwen3-VL-235B-A22B-Thinking \
        --served-model-name qwen3-vl \
        --trust-remote-code
      
      # Then pass API endpoint in model_args when running evaluation
      

    Video Quality (FineVQ)

    • Reference: FineVQ
    • Setup:
      # Clone repository
      git clone https://github.com/IntMeGroup/FineVQ.git third_party/FineVQ
      cd third_party/FineVQ
      
      # Install dependencies
      pip install -r requirements.txt
      pip install flash-attn==2.3.6 --no-build-isolation
      
      # Download model weights as per FineVQ README (Inference section)
      huggingface-cli download IntMeGroup/FineVQ_score --local-dir ./IntMeGroup/FineVQ_score
      
  3. Select Evaluation Subjects

    • Separate subject names with commas, e.g., --evaluate_subjects arcface_consistency,av_align
    • --model_args accepts one JSON string per subject, in the same order, for passing additional configuration.
  4. Run Example

    # Using the example data structure above
    python evaluate.py \
      --model_input_dir data_example/soul-input \
      --model_output_dir data_example/soul-results/InfiniteTalk \
      --results_dir ./evaluation_results \
      --evaluate_subjects arcface_consistency,av_align \
      --model_args '{}','{}'
    
    # Or with custom paths
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --results_dir ./evaluation_results \
      --evaluate_subjects arcface_consistency,av_align \
      --model_args '{}','{}'
    

    Common arguments:

    • --device: Default cuda, some subjects support cpu.
    • --batch_size: Batch size for processing frames, default 16.
    • --sampling: Number of frames to sample per video (default 16). Set DEFAULT_ALL_FRAMES to True to use all frames by default.
  5. View Results

    • Results for all VideoData will be written to results_dir/evaluation_results.json.
    • Each entry in the JSON contains the video path and the evaluation results for each subject (see the snippet below for a quick way to inspect it).
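    • To take a quick look at the report before computing averages, you can load it directly. The snippet below is an illustrative sketch; the exact key names depend on the subjects you ran and the actual JSON layout.

      import json

      with open("evaluation_results/evaluation_results.json") as f:
          report = json.load(f)

      # Print one entry to inspect its schema (handles either a list or a dict layout).
      first = report[0] if isinstance(report, list) else report
      print(json.dumps(first, indent=2))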

Advanced Usage

Parallel Evaluation

For large datasets, you can use parallel_evaluate.py to distribute tasks across multiple GPUs.

# Use 8 GPUs, split into 8 groups
python parallel_evaluate.py \
  --model_input_dir ./inputs \
  --model_output_dir ./outputs \
  --evaluate_subjects arcface_consistency \
  --group_total 8 \
  --num_gpus 8

# Use 4 GPUs, split into 8 groups (2 tasks per GPU)
python parallel_evaluate.py \
  --model_input_dir ./inputs \
  --model_output_dir ./outputs \
  --evaluate_subjects arcface_consistency \
  --group_total 8 \
  --num_gpus 4 \
  --parallelism 4

Result Merging

If you ran evaluations in parallel (or manually split them), use merge_results.py to combine the JSON files.

# Automatically detect and merge results in a directory
python merge_results.py --results_dir ./evaluation_results --subjects arcface_consistency --group_total 8

# Manually specify files to merge
python merge_results.py --input_files result1.json result2.json --output merged.json

Result Analysis

Use calculate_average.py to compute average scores for all metrics in the results file.

# Calculate averages
python calculate_average.py evaluation_results/evaluation_results.json

# Show detailed stats (min, max, etc.)
python calculate_average.py evaluation_results/evaluation_results.json -v

Evaluation Subjects Details

1. Identity Consistency (ArcFace)

  • Function: Uses the InsightFace ArcFace model to measure the similarity between faces in the generated video and the face in the reference image (see the similarity sketch after the usage example).
  • Dependencies: insightface, opencv-python, torch (GPU support required for faster detection).
  • Key Parameters:
    • name: InsightFace model pack, default buffalo_l.
    • det_size: Face detection input size, default (640, 640).
  • Usage Requirements:
    • Must provide the corresponding reference face image (.png).
    • If no face is detected, an exception will be thrown or the frame will be skipped.
    • Results include the mean arcface_consistency score and a frame-by-frame similarity list.
  • Usage Example:
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects arcface_consistency \
      --model_args '{"name": "buffalo_l", "det_size": [640, 640]}'
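
  • Similarity Sketch: the snippet below shows roughly how a single frame can be compared to the reference with InsightFace. It is a standalone, simplified sketch (not the subject's actual code), and the image paths are placeholders.
    import cv2
    import numpy as np
    from insightface.app import FaceAnalysis

    # Load the buffalo_l model pack and prepare detection at 640x640.
    app = FaceAnalysis(name="buffalo_l")
    app.prepare(ctx_id=0, det_size=(640, 640))

    ref = cv2.imread("reference.png")      # reference face image (placeholder path)
    frame = cv2.imread("frame_0001.png")   # in practice, a frame decoded from the .mp4

    ref_faces, frame_faces = app.get(ref), app.get(frame)
    if ref_faces and frame_faces:
        # normed_embedding is L2-normalized, so the dot product equals cosine similarity.
        sim = float(np.dot(ref_faces[0].normed_embedding, frame_faces[0].normed_embedding))
        print(f"cosine similarity: {sim:.3f}")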
    

2. LSE-D/LSE-C (Latent Sync)

  • Function: Integrates ByteDance's open-source LatentSync project to evaluate lip-sync and speech synchronization.
  • Dependencies:
    • System requires git, ffmpeg (LatentSync runtime requirements).
    • Python packages: huggingface_hub (auto download weights), and LatentSync's own dependencies (install according to its README after cloning).
  • Key Parameters:
    • repo_dir: LatentSync code location, default third_party/LatentSync.
    • force_clone: Force re-clone if true.
    • min_track: Minimum number of frames for face tracking, default 50.
    • syncnet_checkpoint, huggingface_repo_id: Custom weight path or source.
    • subject_name: Custom key name for writing results, default latent_sync.
  • Output: Each video's result contains fields such as confidence, av_offset, and num_crops; an error field is recorded on failure.
  • Usage Example:
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects latent_sync \
      --model_args '{"min_track": 50}'
    

3. Audio-Video Alignment (AV-Align)

  • Function: Uses the AV-Align metric to evaluate the alignment between audio and video modalities. By detecting audio peaks and video peaks (based on optical flow), it calculates their Intersection over Union (IoU) to quantify synchronization. Higher IoU scores indicate better alignment.
  • Dependencies:
    • Python packages: opencv-python (cv2), librosa, numpy.
    • System tools: ffmpeg (used to extract audio from video if not provided separately).
  • Key Parameters (passed via --model_args JSON):
    • subject_name: Custom key name for writing results, default av_align.
    • downsample: Frame downsampling factor, default 2. Used to accelerate calculation:
      • 1: No downsampling (slowest, most accurate)
      • 2: Process every other frame (2x speed, recommended)
      • 4: Process every 4th frame (4x speed)
  • Input Requirements:
    • Video file (.mp4) required.
    • If corresponding audio file (.wav) is provided, it will be used directly; otherwise, audio will be extracted from the video automatically.
  • Output Fields:
    • iou_score: Audio-video alignment IoU score (between 0-1, higher is better).
    • num_audio_peaks: Number of detected audio peaks.
    • num_video_peaks: Number of detected video peaks.
    • fps: Effective frame rate of the video (considering downsampling).
    • downsample_factor: Actual downsampling factor used.
    • Records error field if evaluation fails.
  • Performance Optimization:
    • ✅ Vectorized calculation: Use NumPy arrays instead of Python lists
    • ✅ Direct grayscale extraction: Avoid repeated BGR->Grayscale conversion
    • ✅ Pre-allocated memory: Reduce dynamic expansion overhead
    • ✅ Frame downsampling: Optional frame skipping for acceleration (default 2x)
    • ✅ Optimized peak detection: Vectorized local maxima finding
  • Technical Details:
    • Audio peak detection: Uses Onset Detection algorithm (librosa).
    • Video peak detection: Uses Farneback optical flow to calculate inter-frame motion, local maxima are video peaks (filtering out static scenes with magnitude < 0.1).
    • IoU calculation: Each video peak matches at most one audio peak, within a matching window of ±1 frame (see the matching sketch after the usage examples).
  • Usage Example:
    # Default settings (2x speed)
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects av_align \
      --model_args '{}'
    
    # Highest precision (no downsampling, slower)
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects av_align \
      --model_args '{"downsample": 1}'
    
    # Fast mode (4x speed)
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects av_align \
      --model_args '{"downsample": 4}'
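
  • Matching Sketch: a minimal illustration of the peak-matching IoU described under Technical Details. This is a simplified sketch (peak detection itself is omitted), not the subject's actual code.
    def peak_iou(audio_peaks, video_peaks, window=1):
        # Each video peak may match at most one audio peak within +/- `window` frames;
        # IoU = matches / (audio peaks + video peaks - matches).
        unmatched_audio = set(int(p) for p in audio_peaks)
        matched = 0
        for v in video_peaks:
            hit = next((a for a in sorted(unmatched_audio) if abs(a - v) <= window), None)
            if hit is not None:
                unmatched_audio.discard(hit)
                matched += 1
        union = len(audio_peaks) + len(video_peaks) - matched
        return matched / union if union else 0.0

    print(peak_iou([3, 10, 25], [4, 11, 30]))  # 2 matches out of 4 distinct peaks -> 0.5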
    

4. Video-Text Consistency (Qwen-VL)

  • Function: Calls a multimodal large model (default Qwen3-VL) via vLLM to read each video and answer custom prompt questions, recording the model output.
  • Dependencies: vllm and its hardware dependencies (CUDA environment, corresponding VRAM requirements).
  • Key Parameters:
    • model_name: Default qwen3-vl.
    • prompt_template or prompt_template_path: Template for generating text prompts, supports str.format placeholders.
    • template_variables: Optional default variables for the template.
  • Input Requirements: Video paths must be visible to vLLM (the script allows local file:// paths by default).
  • Usage Example:
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects qwen_vl_vllm \
      --model_args '{"model_name": "qwen3-vl", "prompt_template": "./video_eval.txt"}'
    

5. Video Quality (FineVQ)

  • Function: Automatically calls the FineVQ project to score video visual quality.
  • Dependencies:
    • System requires git, ffmpeg (FineVQ dependency environment).
    • Python packages: Dependencies in the FineVQ repository will be installed on demand at runtime.
  • Key Parameters:
    • repo_dir: FineVQ repository storage path, default third_party/FineVQ.
    • repo_url: Repository address, default official repository.
    • install_dependencies: Whether to install dependencies on first run, default true.
    • force_clone: Force re-clone repository.
    • per_device_batch_size, batch_size, nproc_per_node: Control batch size and process count for distributed inference.
    • use_bf16: Whether to enable bfloat16, default true.
    • env: Additional environment variables (e.g., CUDA_VISIBLE_DEVICES).
    • extra_args: Append or override FineVQ CLI arguments as a list.
  • Execution Flow:
    1. Symlink target videos to a temporary directory and generate meta.json required by FineVQ.
    2. Call FineVQ official torch.distributed.run inference script.
    3. Parse output CSV and metric files, and write to video_quality result field.
  • Output Fields: video_quality_score (usually corresponds to pred_score), detailed raw metrics, and optional overall metrics aggregate_metrics.
  • Usage Example:
    python evaluate.py \
      --model_input_dir /path/to/inputs \
      --model_output_dir /path/to/outputs \
      --evaluate_subjects video_quality \
      --model_args '{"use_bf16": true}'
    

Custom Extension

  • To add a new evaluation subject, create a Python file in subjects/ named after the subject and export an evaluate(data_list, device, batch_size, sampling, model_args) function.
  • Custom results for each VideoData are written via register_result(subject_name, payload) and will eventually appear in the summary JSON (see the skeleton below).
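
As a concrete starting point, a new subject might follow the skeleton below. This is an illustrative sketch based on the signature above; the file name my_metric.py, the data.video_path attribute, and the default subject_name are assumptions for the example and should be checked against video.py and the existing subjects.

# subjects/my_metric.py -- illustrative skeleton of a custom subject (not part of the repo).
from tqdm import tqdm

def evaluate(data_list, device, batch_size, sampling, model_args):
    model_args = model_args or {}
    subject_name = model_args.get("subject_name", "my_metric")
    for data in tqdm(data_list, desc=subject_name):
        try:
            # Replace with a real computation on data.video_path (assumed attribute).
            score = 0.0
            data.register_result(subject_name, {"score": score})
        except Exception as exc:
            data.register_result(subject_name, {"error": str(exc)})

Such a subject could then be selected with --evaluate_subjects my_metric and configured via --model_args.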

FAQ

  • High Memory Usage: Adjust --sampling to reduce the number of sampled frames per video, or make sure DEFAULT_ALL_FRAMES is set to False.
  • Missing Dependencies or Models: Install required Python packages or download weights according to error messages, manually preparing the checkpoints/ directory if necessary.
  • Evaluation Order: evaluate.py executes subjects in the order specified in --evaluate_subjects. Changes made to data_list by one subject are passed on to the next.

We hope this guide helps you quickly understand and use the Soul-Bench-Eval framework. For further customization or troubleshooting, refer to the source code of the corresponding subject and adapt it to your project's needs.