---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- video-caption-evaluation
- reference-free
- factual-analysis
- vision-language
library_name: transformers
base_model: Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- dipta007/ActivityNet-FG-It
arxiv: 2509.16538
---
# VC-Inspector-7B
<a href="https://arxiv.org/abs/2509.16538" target="_blank">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-2509.16538-b31b1b.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/collections/dipta007/vc-inspector" target="_blank">
<img alt="Models" src="https://img.shields.io/badge/HuggingFace-Models-orange" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/datasets/dipta007/ActivityNet-FG-It" target="_blank">
<img alt="Dataset" src="https://img.shields.io/badge/HuggingFace-Dataset-blue" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
**VC-Inspector-7B** is a lightweight, open-source large multimodal model (LMM) for **reference-free evaluation of video captions** with a focus on **factual accuracy**. Unlike existing metrics that suffer from limited context handling, weak factuality assessment, or reliance on proprietary services, VC-Inspector offers a reproducible, fact-aware alternative that aligns closely with human judgments.
This model is fine-tuned from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using LoRA on our synthetic dataset [ActivityNet-FG-It](https://huggingface.co/datasets/dipta007/ActivityNet-FG-It), which contains 44K video-caption pairs with controlled factual errors and quality annotations.
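To inspect the training data, the dataset can be loaded straight from the Hub. A minimal sketch; see the dataset card for the exact split names and column schema, which are not assumed here:
```python
from datasets import load_dataset

# Download the synthetic training data; consult the dataset card
# for the exact split names and column schema.
ds = load_dataset("dipta007/ActivityNet-FG-It")
print(ds)
```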
### Key Features
- **Reference-free Evaluation**: Evaluates video captions without requiring ground-truth references
- **Factual Grounding**: Detects factual errors in objects and actions within captions
- **Interpretable Outputs**: Generates quality scores (1-5) with natural language explanations
- **Cross-domain Generalization**: Works on both video and image caption evaluation
- **State-of-the-art Performance**: Outperforms GPT-4o-based methods on VATEX-Eval
### Model Architecture
VC-Inspector-7B is built on Qwen2.5-VL-7B-Instruct with the following modifications:
- **Vision Encoder**: Frozen (preserves generalization)
- **Visual-Language Projector**: Frozen
- **LLM Component**: Fine-tuned with LoRA (rank=32, alpha=32), as sketched below
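For reference, here is a minimal [PEFT](https://github.com/huggingface/peft)-style sketch of this adapter setup. The `target_modules` list is an assumption (the standard Qwen2.5 attention and MLP projections); the card itself specifies only the rank, alpha, and dropout:
```python
from peft import LoraConfig

# LoRA hyperparameters from the Training Details section below.
# target_modules is an assumption: the standard attention/MLP
# projections of the Qwen2.5 language model. The vision encoder
# and projector stay frozen simply by not being targeted.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```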
## Evaluation Results
### Correlation with Human Judgments on VATEX-Eval
| Metric | Type | Kendall's τ_b | Spearman's ρ |
|:-------|:-----|:-------------:|:------------:|
| EMScore | Reference-free | 22.88 | 29.79 |
| CLIPScore | Reference-free | 22.33 | 29.09 |
| ViCLIPScore | Reference-free | 30.92 | 39.86 |
| G-VEval (GPT-4o) | Reference-free | 39.40 | - |
| Qwen2.5-VL-7B (base) | Reference-free | 34.70 | 39.40 |
| **VC-Inspector-7B** | Reference-free | **42.58** | **45.99** |
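Both statistics can be computed with `scipy.stats`; `kendalltau` uses the τ_b variant by default. A toy sketch with made-up scores (the real evaluation pairs per-caption model scores with VATEX-Eval human judgments):
```python
from scipy.stats import kendalltau, spearmanr

# Toy example: replace with per-caption model scores and human judgments.
model_scores = [4, 2, 5, 3, 1]
human_scores = [5, 2, 4, 3, 1]

tau_b, _ = kendalltau(model_scores, human_scores)  # tau-b is scipy's default variant
rho, _ = spearmanr(model_scores, human_scores)
print(f"Kendall's tau_b: {tau_b:.2f}, Spearman's rho: {rho:.2f}")
```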
### Cross-domain Evaluation on Image Caption Benchmarks
| Metric | Flickr8K-Expert (τ_b) | Flickr8K-CF (τ_b) |
|:-------|:---------------------:|:-----------------:|
| CLIPScore (ref-free) | 51.10 | 34.40 |
| PAC-S (ref-free) | 53.90 | 36.00 |
| **VC-Inspector-7B** | **63.43** | **45.97** |
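Image caption evaluation reuses the same evaluation prompt (defined in the Quickstart below); only the message content switches from a video entry to an image entry, following the standard Qwen2.5-VL message format. The path is a placeholder:
```python
# Same evaluation prompt as in the Quickstart, but with an image input.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.jpg"},
            {"type": "text", "text": prompt},
        ],
    }
]
```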
### Synthetic Dataset Evaluation
| Dataset | Kendall's τ_b | Spearman's ρ |
|:--------|:-------------:|:------------:|
| ActivityNet-FG-Eval | 49.53 | 62.01 |
| YouCook2-FG-Eval | 44.29 | 55.31 |
## Requirements
```bash
pip install torch transformers accelerate
pip install qwen-vl-utils[decord]==0.0.8
pip install flash-attn --no-build-isolation
```
## Quickstart
### Using Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# Load model
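# flash-attn was installed in Requirements; you can optionally pass
# attn_implementation="flash_attention_2" here for faster inference.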
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"dipta007/VCInspector-7B",
torch_dtype="auto",
device_map="auto",
)
processor = AutoProcessor.from_pretrained("dipta007/VCInspector-7B")
# Prepare input
caption = "A man is playing guitar in a field"
prompt = f"""<caption>{caption}</caption>
You are given a video and a caption describing the video content. Please rate the helpfulness, relevance, accuracy, level of details of the caption. The overall score should be on a scale of 1 to 5, where a higher score indicates better overall performance. Please first output a single line containing only one integer indicating the score. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias. STRICTLY FOLLOW THE FORMAT."""
messages = [
{
"role": "user",
"content": [
{"type": "video", "video": "path/to/video.mp4", "max_pixels": 360 * 420, "fps": 1.0},
{"type": "text", "text": prompt},
],
}
]
# Process and generate
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
### Example Output
```
2
The caption does not accurately capture the video content. For example, the objects (guitar) are incorrect.
```
### Using with ms-swift (vLLM backend)
```python
from swift.llm import VllmEngine, InferRequest, RequestConfig
import os
os.environ["VIDEO_MAX_PIXELS"] = "50176"
os.environ["FPS_MAX_FRAMES"] = "12"
engine = VllmEngine(
"dipta007/VCInspector-7B",
max_model_len=32768,
limit_mm_per_prompt={"image": 32}
)
# Prepare request (reusing the evaluation prompt defined in the Transformers example above)
request = InferRequest(
messages=[{"role": "user", "content": f"<image>\n{prompt}"}],
images=["frame1.jpg", "frame2.jpg", ...]  # video frames, e.g. from the sampling helper below
)
config = RequestConfig(max_tokens=256, temperature=0.0)
response = engine.infer([request], config)
print(response[0].choices[0].message.content)
```
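The vLLM example above expects video frames as individual image files on disk. Here is a hypothetical sampling helper using decord (already installed via `qwen-vl-utils[decord]`); the uniform-sampling strategy and file naming are assumptions, not part of the repository:
```python
import decord
from PIL import Image

def sample_frames(video_path: str, num_frames: int = 12) -> list[str]:
    """Uniformly sample frames from a video and save them as JPEG files."""
    vr = decord.VideoReader(video_path)
    step = (len(vr) - 1) / max(num_frames - 1, 1)
    paths = []
    for rank in range(num_frames):
        frame = Image.fromarray(vr[int(rank * step)].asnumpy())
        path = f"frame{rank}.jpg"
        frame.save(path)
        paths.append(path)
    return paths

frames = sample_frames("path/to/video.mp4", num_frames=12)  # matches FPS_MAX_FRAMES above
```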
## Output Format
VC-Inspector outputs two components (see the parsing sketch after this list):
1. **Quality Score** (Line 1): Integer from 1-5
- 5: Caption is accurate and comprehensive
- 4: Minor factual errors
- 3: Moderate factual errors
- 2: Significant factual errors
- 1: Major factual errors or completely incorrect
2. **Explanation** (Line 2+): Natural language explanation identifying:
- Incorrect objects (e.g., "guitar" instead of "violin")
- Incorrect actions (e.g., "running" instead of "walking")
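Because this format is fixed, responses can be split mechanically. A minimal parser (a hypothetical helper, not part of the model repository):
```python
def parse_inspector_output(text: str) -> tuple[int, str]:
    """Split a VC-Inspector response into (score, explanation)."""
    lines = text.strip().splitlines()
    score = int(lines[0].strip())               # line 1: integer score, 1-5
    explanation = "\n".join(lines[1:]).strip()  # line 2+: explanation
    return score, explanation

score, explanation = parse_inspector_output(output_text[0])
```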
## Training Details
| Hyperparameter | Value |
|:---------------|:------|
| Base Model | Qwen2.5-VL-7B-Instruct |
| Training Data | ActivityNet-FG-It (44K samples) |
| Epochs | 1 |
| Global Batch Size | 128 |
| Learning Rate | 1e-4 |
| LR Scheduler | Cosine (min: 1e-5) |
| LoRA Rank | 32 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| Number of Frames | 32 |
| Training Time | ~32 GPU hours (A100) |
## Limitations
- Primarily targets object and action correctness; attributes, spatial relationships, and fine-grained temporal ordering are not explicitly modeled
- Training relies on synthetically generated captions and pseudo-scores
- Higher computational cost compared to embedding-based metrics (though more lightweight than GPT-4o)
## Citation
If you find this work useful, please cite our paper:
```bibtex
@misc{dipta2025advancingreferencefreeevaluationvideo,
title={Advancing Reference-free Evaluation of Video Captions with Factual Analysis},
author={Shubhashis Roy Dipta and Tz-Ying Wu and Subarna Tripathi},
year={2025},
eprint={2509.16538},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.16538},
}
```
## Acknowledgements
This work builds upon [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and uses [ms-swift](https://github.com/modelscope/ms-swift) for training.