Soul-Bench-Eval User Guide
This project provides an extensible, objective evaluation framework for video generation models. Through a unified entry script, `evaluate.py`, multiple evaluation subjects can be run in sequence on the same batch of generation results, producing a structured JSON report. The following sections cover the project structure, preparation steps, overall process, and usage notes for each evaluation subject.
Directory Structure Overview
```
Soul-Bench-Eval/
├── evaluate.py            # Main evaluation script, loads data and runs subjects sequentially
├── parallel_evaluate.py   # Script for parallel execution across multiple GPUs
├── merge_results.py       # Script for merging results from parallel execution
├── calculate_average.py   # Script for calculating average scores from JSON reports
├── utils.py               # Video/image I/O and common utility functions
├── video.py               # `VideoData` data structure definition
└── subjects/              # Concrete evaluation subject implementations
    ├── arcface_consistency.py
    ├── av_align.py
    ├── latent_sync.py
    ├── qwen_vl_vllm.py
    └── video_quality.py
```
Core Process:
- `evaluate.py` constructs a list of `VideoData` based on the specified input/output directories.
- It imports modules from the `subjects/` directory based on the `--evaluate_subjects` argument and calls their `evaluate` functions.
- Each subject writes its evaluation results to `VideoData` (`register_result`).
- All results are finally saved as `evaluation_results.json`.
The `VideoData` object automatically associates reference images (`.png`), audio (`.wav`), and text (`.txt`) that share the video's name, and provides convenience methods such as `has_image`/`has_audio`/`has_text` for subjects to call.
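As a rough mental model, the structure behaves roughly like the sketch below; apart from `has_image`/`has_audio`/`has_text` and `register_result`, the field and helper names are illustrative assumptions, not the actual `video.py` code.

```python
# Illustrative sketch of a VideoData-like container (not the actual video.py implementation).
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class VideoDataSketch:
    video_path: Path   # generated .mp4 from --model_output_dir
    input_dir: Path    # --model_input_dir holding same-stem reference assets
    results: dict = field(default_factory=dict)

    def _ref(self, suffix: str) -> Path:
        # Reference assets share the video's stem: 0016_xxx.mp4 -> 0016_xxx.png / .wav / .txt
        return self.input_dir / (self.video_path.stem + suffix)

    def has_image(self) -> bool:
        return self._ref(".png").exists()

    def has_audio(self) -> bool:
        return self._ref(".wav").exists()

    def has_text(self) -> bool:
        return self._ref(".txt").exists()

    def register_result(self, subject_name: str, payload) -> None:
        # Each subject stores its metrics under its own key; the collected dicts
        # are later dumped into evaluation_results.json.
        self.results[subject_name] = payload
```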
Environment Preparation
- Python 3.10 or above (use a version compatible with the dependencies).
- Common dependencies (can be installed via `pip install -r requirements.txt`): `torch`, `torchvision`, `transformers`, `tqdm`, `numpy`, `decord`, `opencv-python`, `pillow`.
- Each subject may require additional third-party libraries or model weights, see below for details.
Quick Start
Prepare Evaluation Data
Your data should follow this structure:
```
data_example/
├── soul-input/                          # Reference materials
│   ├── 0016_human_talk_en_female.json
│   ├── 0016_human_talk_en_female.png
│   ├── 0016_human_talk_en_female.wav
│   └── ...
└── soul-results/                        # Generated videos
    └── InfiniteTalk/                    # Model name (optional subdirectory)
        ├── 0016_human_talk_en_female.mp4
        └── ...
```

- `--model_input_dir`: Folder containing reference materials: `.png` (reference image), `.wav` (reference audio), and `.txt` or `.json` (text prompt) with the same name as the video. Can be omitted if not needed.
- `--model_output_dir`: Video results generated by the model to be evaluated; the extension must be `.mp4`.
Prepare Environment
Different evaluation subjects require different pre-trained models or third-party repositories:
Identity Consistency (ArcFace)
- Reference: InsightFace
- Setup:
```bash
pip install insightface
# Models will be auto-downloaded on first run to ~/.insightface/models/
```
LSE-D/LSE-C (Latent Sync)
- Reference: LatentSync
- Setup:
```bash
# Clone repository
git clone https://github.com/bytedance/LatentSync.git third_party/LatentSync
cd third_party/LatentSync

# Install dependencies
source setup_env.sh

# Models will be auto-downloaded from HuggingFace (ByteDance/LatentSync-1.5)
```
Audio-Video Alignment (AV-Align)
- Reference: Built-in implementation (no external repo needed)
- Setup:
```bash
pip install opencv-python librosa numpy
```
Video-Text Consistency (Qwen-VL)
- Reference: Qwen3-VL / vLLM
- Setup:
```bash
pip install vllm

# Start vLLM server
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen3-VL-235B-A22B-Thinking \
    --served-model-name qwen3-vl \
    --trust-remote-code

# Then pass the API endpoint in model_args when running evaluation
```
Video Quality (FineVQ)
- Reference: FineVQ
- Setup:
```bash
# Clone repository
git clone https://github.com/IntMeGroup/FineVQ.git third_party/FineVQ
cd third_party/FineVQ

# Install dependencies
pip install -r requirements.txt
pip install flash-attn==2.3.6 --no-build-isolation

# Download model weights as per FineVQ README (Inference section)
huggingface-cli download IntMeGroup/FineVQ_score --local-dir ./IntMeGroup/FineVQ_score
```
Select Evaluation Subjects
- `--evaluate_subjects`: Subjects separated by commas, e.g., `--evaluate_subjects arcface_consistency,av_align`.
- `--model_args`: One JSON string per subject (same count, comma-separated) to pass additional configuration.
Run Example
```bash
# Using the example data structure above
python evaluate.py \
    --model_input_dir data_example/soul-input \
    --model_output_dir data_example/soul-results/InfiniteTalk \
    --results_dir ./evaluation_results \
    --evaluate_subjects arcface_consistency,av_align \
    --model_args '{}','{}'

# Or with custom paths
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --results_dir ./evaluation_results \
    --evaluate_subjects arcface_consistency,av_align \
    --model_args '{}','{}'
```

Common arguments:
- `--device`: Default `cuda`; some subjects support `cpu`.
- `--batch_size`: Batch size for processing frames, default `16`.
- `--sampling`: Number of frames to sample per video. By default 16 frames are sampled; change `DEFAULT_ALL_FRAMES` to `True` to use all frames by default.
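For intuition, uniform frame sampling of the kind `--sampling` controls could be implemented roughly as follows with `decord` (a sketch, not the framework's actual `utils.py` code; the helper name is hypothetical):

```python
# Hypothetical sketch of uniform frame sampling with decord (not the actual utils.py code).
import numpy as np
from decord import VideoReader

def sample_frames(video_path: str, num_frames: int = 16) -> np.ndarray:
    """Uniformly sample `num_frames` RGB frames from a video."""
    vr = VideoReader(video_path)
    if num_frames >= len(vr):
        indices = np.arange(len(vr))          # short video: keep all frames
    else:
        indices = np.linspace(0, len(vr) - 1, num=num_frames).astype(int)
    return vr.get_batch(indices).asnumpy()    # shape: (N, H, W, 3)
```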
View Results
- Results for all `VideoData` will be written to `results_dir/evaluation_results.json`.
- Each entry in the JSON contains the video path and the evaluation results for each subject.
Advanced Usage
Parallel Evaluation
For large datasets, you can use parallel_evaluate.py to distribute tasks across multiple GPUs.
```bash
# Use 8 GPUs, split into 8 groups
python parallel_evaluate.py \
    --model_input_dir ./inputs \
    --model_output_dir ./outputs \
    --evaluate_subjects arcface_consistency \
    --group_total 8 \
    --num_gpus 8

# Use 4 GPUs, split into 8 groups (2 tasks per GPU)
python parallel_evaluate.py \
    --model_input_dir ./inputs \
    --model_output_dir ./outputs \
    --evaluate_subjects arcface_consistency \
    --group_total 8 \
    --num_gpus 4 \
    --parallelism 4
```
Result Merging
If you ran evaluations in parallel (or manually split them), use merge_results.py to combine the JSON files.
```bash
# Automatically detect and merge results in a directory
python merge_results.py --results_dir ./evaluation_results --subjects arcface_consistency --group_total 8

# Manually specify files to merge
python merge_results.py --input_files result1.json result2.json --output merged.json
```
Result Analysis
Use calculate_average.py to compute average scores for all metrics in the results file.
```bash
# Calculate averages
python calculate_average.py evaluation_results/evaluation_results.json

# Show detailed stats (min, max, etc.)
python calculate_average.py evaluation_results/evaluation_results.json -v
```
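If you prefer to post-process the report yourself, averaging a single metric might look roughly like the sketch below; the exact layout of `evaluation_results.json` (list of per-video entries, per-subject result dicts) is an assumption here, so adapt it to what the file actually contains:

```python
# Hypothetical sketch for averaging one metric from the report JSON
# (the structure of evaluation_results.json is assumed, not guaranteed).
import json

with open("evaluation_results/evaluation_results.json") as f:
    entries = json.load(f)

scores = []
for entry in entries:                               # assumed: a list of per-video entries
    result = entry.get("arcface_consistency")       # assumed: results keyed by subject name
    if isinstance(result, dict) and "arcface_consistency" in result:
        scores.append(result["arcface_consistency"])

if scores:
    print(f"arcface_consistency mean over {len(scores)} videos: {sum(scores) / len(scores):.4f}")
```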
Evaluation Subjects Details
1. Identity Consistency (ArcFace)
- Function: Uses the InsightFace ArcFace model to measure the similarity between faces in the video and the face in the reference image.
- Dependencies: `insightface`, `opencv-python`, `torch` (GPU support recommended for faster detection).
- Key Parameters:
  - `name`: InsightFace model combination, default `buffalo_l`.
  - `det_size`: Face detection input size, default `(640, 640)`.
- Usage Requirements:
  - Must provide the corresponding reference face image (`.png`).
  - If no face is detected, an exception is thrown or the frame is skipped.
  - Results include the `arcface_consistency` mean and a frame-by-frame similarity list.
- Usage Example:
```bash
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects arcface_consistency \
    --model_args '{"name": "buffalo_l", "det_size": [640, 640]}'
```
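For intuition, the per-frame similarity is conceptually along these lines (a minimal sketch using the public InsightFace API, not the code in `subjects/arcface_consistency.py`; file names are placeholders):

```python
# Sketch of ArcFace identity similarity between a reference image and video frames
# (illustrative only; the actual subject implementation may differ).
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))    # ctx_id=0 -> first GPU, -1 -> CPU

def face_embedding(image_bgr):
    faces = app.get(image_bgr)
    if not faces:
        return None                            # no face detected in this image/frame
    return faces[0].normed_embedding           # L2-normalized ArcFace embedding

ref = face_embedding(cv2.imread("reference.png"))

cap = cv2.VideoCapture("generated.mp4")
similarities = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    emb = face_embedding(frame)
    if ref is not None and emb is not None:
        similarities.append(float(np.dot(ref, emb)))   # cosine similarity of normalized embeddings
cap.release()

score = float(np.mean(similarities)) if similarities else float("nan")
print(f"arcface_consistency (mean over frames): {score:.4f}")
```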
2. LSE-D/LSE-C (Latent Sync)
- Function: Integrates ByteDance's open-source LatentSync project to evaluate lip-sync and speech synchronization.
- Dependencies:
  - System: `git`, `ffmpeg` (LatentSync runtime requirements).
  - Python packages: `huggingface_hub` (auto-downloads weights) and LatentSync's own dependencies (install according to its README after cloning).
- Key Parameters:
  - `repo_dir`: LatentSync code location, default `third_party/LatentSync`.
  - `force_clone`: Force re-clone if `true`.
  - `min_track`: Minimum number of frames for face tracking, default `50`.
  - `syncnet_checkpoint`, `huggingface_repo_id`: Custom weight path or source.
  - `subject_name`: Custom key name for writing results, default `latent_sync`.
- Output: Each video contains fields like `confidence`, `av_offset`, `num_crops`; an `error` field is recorded on failure.
- Usage Example:
```bash
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects latent_sync \
    --model_args '{"min_track": 50}'
```
3. Audio-Video Alignment (AV-Align)
- Function: Uses the AV-Align metric to evaluate the alignment between audio and video modalities. By detecting audio peaks and video peaks (based on optical flow), it calculates their Intersection over Union (IoU) to quantify synchronization. Higher IoU scores indicate better alignment.
- Dependencies:
  - Python packages: `opencv-python` (`cv2`), `librosa`, `numpy`.
  - System tools: `ffmpeg` (used to extract audio from the video if it is not provided separately).
- Key Parameters (passed via `--model_args` JSON):
  - `subject_name`: Custom key name for writing results, default `av_align`.
  - `downsample`: Frame downsampling factor, default `2`, used to accelerate the calculation:
    - `1`: No downsampling (slowest, most accurate).
    - `2`: Process every other frame (2x speed, recommended).
    - `4`: Process every 4th frame (4x speed).
- Input Requirements:
  - Video file (`.mp4`) is required.
  - If a corresponding audio file (`.wav`) is provided, it is used directly; otherwise, audio is extracted from the video automatically.
- Output Fields:
  - `iou_score`: Audio-video alignment IoU score (between 0 and 1, higher is better).
  - `num_audio_peaks`: Number of detected audio peaks.
  - `num_video_peaks`: Number of detected video peaks.
  - `fps`: Effective frame rate of the video (after downsampling).
  - `downsample_factor`: Actual downsampling factor used.
  - An `error` field is recorded if evaluation fails.
- Performance Optimization:
  - Vectorized calculation: NumPy arrays instead of Python lists.
  - Direct grayscale extraction: Avoids repeated BGR-to-grayscale conversion.
  - Pre-allocated memory: Reduces dynamic expansion overhead.
  - Frame downsampling: Optional frame skipping for acceleration (default 2x).
  - Optimized peak detection: Vectorized local-maxima finding.
- Technical Details (a small sketch of the peak matching follows the usage example below):
  - Audio peak detection: Onset detection (librosa).
  - Video peak detection: Farneback optical flow measures inter-frame motion; local maxima of the motion magnitude are video peaks (static frames with magnitude < 0.1 are filtered out).
  - IoU calculation: Each video peak matches at most one audio peak within a matching window of ±1 frame.
- Usage Example:
```bash
# Default settings (2x speed)
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects av_align \
    --model_args '{}'

# Highest precision (no downsampling, slower)
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects av_align \
    --model_args '{"downsample": 1}'

# Fast mode (4x speed)
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects av_align \
    --model_args '{"downsample": 4}'
```
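As referenced in the technical details above, the peak-matching idea behind the IoU score can be sketched as follows (illustrative only; peak extraction, thresholds, and the function name are assumptions rather than the actual `subjects/av_align.py` code):

```python
# Sketch of AV-Align style peak matching (illustrative, not the actual implementation).
# In the real metric, audio peaks come from librosa onset detection and video peaks
# from local maxima of Farneback optical-flow magnitude.
import numpy as np

def match_iou(audio_peaks: np.ndarray, video_peaks: np.ndarray, window: int = 1) -> float:
    """IoU between audio and video peak sets, where a video peak matches
    at most one audio peak within +/- `window` frames."""
    used = np.zeros(len(audio_peaks), dtype=bool)
    matches = 0
    for vp in video_peaks:
        candidates = np.where(~used & (np.abs(audio_peaks - vp) <= window))[0]
        if len(candidates) > 0:
            used[candidates[0]] = True     # consume the matched audio peak
            matches += 1
    union = len(audio_peaks) + len(video_peaks) - matches
    return matches / union if union > 0 else 0.0

# Example: audio onsets at frames 10, 30, 60; video motion peaks at 11, 29, 80.
print(match_iou(np.array([10, 30, 60]), np.array([11, 29, 80])))  # -> 2 matches / 4 union = 0.5
```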
4. Video-Text Consistency (Qwen-VL)
- Function: Calls multimodal large models (default Qwen3-VL) via vLLM to read videos and answer custom prompt questions, recording model output.
- Dependencies: `vllm` and its hardware requirements (CUDA environment, sufficient VRAM).
- Key Parameters:
  - `model_name`: Default `qwen3-vl`.
  - `prompt_template` or `prompt_template_path`: Template for generating text prompts; supports `str.format` placeholders.
  - `template_variables`: Optional default variables for the template.
- Input Requirements: Video paths must be visible to vLLM (the script defaults to allowing local `file://` paths).
- Usage Example:
```bash
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects qwen_vl_vllm \
    --model_args '{"model_name": "qwen3-vl", "prompt_template": "./video_eval.txt"}'
```
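For orientation, a request against the vLLM OpenAI-compatible server could look roughly like the sketch below. This is not the subject's actual request logic: the endpoint and port are assumptions, and whether `video_url` content is accepted depends on your vLLM version and model.

```python
# Sketch of calling a vLLM OpenAI-compatible server with a video prompt.
# Endpoint, port, and video_url support are assumptions that depend on your vLLM setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# str.format placeholder, as used by prompt_template / template_variables
prompt = "Does the video match this description: {text_prompt}? Answer yes or no and explain."

response = client.chat.completions.create(
    model="qwen3-vl",                      # matches --served-model-name above
    messages=[{
        "role": "user",
        "content": [
            {"type": "video_url", "video_url": {"url": "file:///path/to/video.mp4"}},
            {"type": "text", "text": prompt.format(text_prompt="a woman talking to the camera")},
        ],
    }],
)
print(response.choices[0].message.content)
```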
5. Video Quality (FineVQ)
- Function: Automatically calls the FineVQ project to score video visual quality.
- Dependencies:
  - System: `git`, `ffmpeg` (FineVQ dependency environment).
  - Python packages: Dependencies in the FineVQ repository are installed on demand at runtime.
- Key Parameters:
  - `repo_dir`: FineVQ repository storage path, default `third_party/FineVQ`.
  - `repo_url`: Repository address, defaults to the official repository.
  - `install_dependencies`: Whether to install dependencies on first run, default `true`.
  - `force_clone`: Force re-clone of the repository.
  - `per_device_batch_size`, `batch_size`, `nproc_per_node`: Control batch size and process count for distributed inference.
  - `use_bf16`: Whether to enable bfloat16, default `true`.
  - `env`: Additional environment variables (e.g., `CUDA_VISIBLE_DEVICES`).
  - `extra_args`: Append or override FineVQ CLI arguments as a list.
- Execution Flow:
  1. Symlink the target videos to a temporary directory and generate the `meta.json` required by FineVQ.
  2. Call FineVQ's official `torch.distributed.run` inference script.
  3. Parse the output CSV and metric files and write them to the `video_quality` result field.
- Output Fields: `video_quality_score` (usually corresponds to `pred_score`), detailed raw metrics, and optional overall metrics in `aggregate_metrics`.
- Usage Example:
```bash
python evaluate.py \
    --model_input_dir /path/to/inputs \
    --model_output_dir /path/to/outputs \
    --evaluate_subjects video_quality \
    --model_args '{"use_bf16": true}'
```
Custom Extension
- To add a new evaluation subject, create a Python file with the subject's name in `subjects/` and export an `evaluate(data_list, device, batch_size, sampling, model_args)` function (see the sketch after this list).
- Custom results for each `VideoData` are written via `register_result(subject_name, payload)` and will eventually appear in the summary JSON.
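A minimal custom subject might look like the sketch below; the metric (mean frame brightness) is just a placeholder to illustrate the `evaluate()` contract, and the `video_path` attribute and return value are assumptions about `VideoData` rather than guaranteed behavior.

```python
# subjects/mean_brightness.py -- hypothetical example subject, not part of the repo.
# The metric (mean frame brightness) is a placeholder to show the evaluate() contract.
import cv2
import numpy as np

def evaluate(data_list, device="cuda", batch_size=16, sampling=16, model_args=None):
    model_args = model_args or {}
    subject_name = model_args.get("subject_name", "mean_brightness")

    for data in data_list:
        cap = cv2.VideoCapture(str(data.video_path))   # assumes VideoData exposes the video path
        values = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            values.append(float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))))
        cap.release()

        payload = {"mean_brightness": float(np.mean(values)) if values else None}
        data.register_result(subject_name, payload)

    # Returning data_list keeps it available for the next subject in the chain.
    return data_list
```

It could then be selected like any built-in subject, e.g. `--evaluate_subjects mean_brightness --model_args '{}'`.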
FAQ
- High Memory Usage: Adjust `--sampling` to reduce the number of sampled frames per video, or set `DEFAULT_ALL_FRAMES = False`.
- Missing Dependencies or Models: Install the required Python packages or download weights according to the error messages, manually preparing the `checkpoints/` directory if necessary.
- Evaluation Order: `evaluate.py` executes subjects in the order specified in `--evaluate_subjects`; changes made to `data_list` by one subject are passed to the next.
We hope this guide helps you quickly understand and use the Soul-Bench-Eval framework. For further customization or troubleshooting, please refer to the source code of the corresponding subject and adjust it to your project's needs.