---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- video-caption-evaluation
- reference-free
- factual-analysis
- vision-language
library_name: transformers
base_model: Qwen/Qwen2.5-VL-3B-Instruct
datasets:
- dipta007/ActivityNet-FG-It
arxiv: 2509.16538
---
|
|
|
|
|
# VC-Inspector-3B
|
|
|
|
|
<a href="https://arxiv.org/abs/2509.16538" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-2509.16538-b31b1b.svg" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/collections/dipta007/vc-inspector" target="_blank">
  <img alt="Models" src="https://img.shields.io/badge/HuggingFace-Models-orange" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/datasets/dipta007/ActivityNet-FG-It" target="_blank">
  <img alt="Dataset" src="https://img.shields.io/badge/HuggingFace-Dataset-blue" style="display: inline-block; vertical-align: middle;"/>
</a>
|
|
|
|
|
## Introduction

**VC-Inspector-3B** is a lightweight, open-source large multimodal model (LMM) for **reference-free evaluation of video captions** with a focus on **factual accuracy**. It is the smaller, more efficient variant of VC-Inspector, suited to resource-constrained environments while still achieving strong performance.

Unlike existing metrics that suffer from limited context handling, weak factuality assessment, or reliance on proprietary services, VC-Inspector offers a reproducible, fact-aware alternative that aligns closely with human judgments.

This model is fine-tuned from [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using LoRA on our synthetic dataset [ActivityNet-FG-It](https://huggingface.co/datasets/dipta007/ActivityNet-FG-It), which contains 44K video-caption pairs with controlled factual errors and quality annotations.
|
|
|
|
|
### Key Features

- **Lightweight**: Only 3B parameters, suitable for on-device or resource-constrained deployment
- **Reference-free Evaluation**: Evaluates video captions without requiring ground-truth references
- **Factual Grounding**: Detects factual errors in objects and actions within captions
- **Interpretable Outputs**: Generates quality scores (1-5) with natural language explanations
- **Cross-domain Generalization**: Works on both video and image caption evaluation
- **Fast Inference**: 0.30 seconds per video clip on an A100 GPU
|
|
|
|
|
### Model Architecture

VC-Inspector-3B is built on Qwen2.5-VL-3B-Instruct with the following modifications (see the configuration sketch after this list):

- **Vision Encoder**: Frozen (preserves generalization)
- **Visual-Language Projector**: Frozen
- **LLM Component**: Fine-tuned with LoRA (rank=32, alpha=32)
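
For reference, the following is a minimal sketch of this setup with the `peft` library. The released model was trained with ms-swift, and the attribute and target-module names below are illustrative assumptions rather than the exact training configuration.

```python
# Sketch of the LoRA setup described above (assumptions: peft, and the Hugging Face
# Qwen2.5-VL implementation, where `model.visual` holds the vision encoder and projector).
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto"
)

# Freeze the vision encoder and the visual-language projector.
for param in model.visual.parameters():
    param.requires_grad = False

# LoRA on the LLM component only (target modules are assumed attention projections).
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```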
|
|
|
|
|
## Evaluation Results

### Correlation with Human Judgments on VATEX-Eval

| Metric | Type | Kendall's τ_b | Spearman's ρ |
|:-------|:-----|:-------------:|:------------:|
| EMScore | Reference-free | 22.88 | 29.79 |
| CLIPScore | Reference-free | 22.33 | 29.09 |
| ViCLIPScore | Reference-free | 30.92 | 39.86 |
| Qwen2.5-VL-3B (base) | Reference-free | 31.29 | 36.43 |
| G-VEval (GPT-4o) | Reference-free | 39.40 | - |
| **VC-Inspector-3B** | Reference-free | **37.99** | **42.45** |
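
The numbers above are rank correlations (scaled by 100) between metric scores and human judgments. As a hypothetical illustration of how such correlations are computed with scipy (the score lists below are made up, not the evaluation data):

```python
# Rank correlations between automatic metric scores and human judgments.
from scipy.stats import kendalltau, spearmanr

metric_scores = [3, 5, 1, 4, 2]  # scores produced by an automatic metric
human_scores = [2, 5, 1, 4, 3]   # human quality judgments for the same captions

tau_b, _ = kendalltau(metric_scores, human_scores)  # Kendall's tau-b (default variant)
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Kendall's tau_b: {100 * tau_b:.2f}, Spearman's rho: {100 * rho:.2f}")
```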
|
|
|
|
|
### Cross-domain Evaluation on Image Caption Benchmarks

| Metric | Flickr8K-Expert (τ_b) | Flickr8K-CF (τ_b) |
|:-------|:---------------------:|:-----------------:|
| CLIPScore (ref-free) | 51.10 | 34.40 |
| PAC-S (ref-free) | 53.90 | 36.00 |
| **VC-Inspector-3B** | **59.86** | **39.00** |
|
|
|
|
|
### Synthetic Dataset Evaluation

| Dataset | Kendall's τ_b | Spearman's ρ |
|:--------|:-------------:|:------------:|
| ActivityNet-FG-Eval | 49.53 | 62.01 |
| YouCook2-FG-Eval | 44.29 | 55.31 |
|
|
|
|
|
### Computational Efficiency

| Metric | Time per clip (A100) |
|:-------|:--------------------:|
| EMScore | 0.42s |
| ViCLIPScore | 0.34s |
| **VC-Inspector-3B** | **0.30s** |
|
|
|
|
|
## Requirements

```bash
pip install torch transformers accelerate
pip install qwen-vl-utils[decord]==0.0.8
pip install flash-attn --no-build-isolation
```
|
|
|
|
|
## Quickstart

### Using Transformers

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load model
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "dipta007/VCInspector-3B",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("dipta007/VCInspector-3B")

# Prepare input
caption = "A man is playing guitar in a field"
prompt = f"""<caption>{caption}</caption>

You are given a video and a caption describing the video content. Please rate the helpfulness, relevance, accuracy, level of details of the caption. The overall score should be on a scale of 1 to 5, where a higher score indicates better overall performance. Please first output a single line containing only one integer indicating the score. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias. STRICTLY FOLLOW THE FORMAT."""

messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "path/to/video.mp4", "max_pixels": 360 * 420, "fps": 1.0},
            {"type": "text", "text": prompt},
        ],
    }
]

# Process and generate
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
|
|
|
|
|
### Example Output

```
4
The caption does not accurately capture the video content. For example, the objects (guitar) are incorrect.
```
|
|
|
|
|
### Using with ms-swift (vLLM backend)

```python
from swift.llm import VllmEngine, InferRequest, RequestConfig
import os

os.environ["VIDEO_MAX_PIXELS"] = "50176"
os.environ["FPS_MAX_FRAMES"] = "12"

engine = VllmEngine(
    "dipta007/VCInspector-3B",
    max_model_len=32768,
    limit_mm_per_prompt={"image": 32},
)

# Prepare request; `prompt` is the same evaluation prompt defined in the Transformers example above
request = InferRequest(
    messages=[{"role": "user", "content": f"<image>\n{prompt}"}],
    images=["frame1.jpg", "frame2.jpg", ...],  # video frames (see the sampling sketch below)
)
config = RequestConfig(max_tokens=256, temperature=0.0)
response = engine.infer([request], config)
print(response[0].choices[0].message.content)
```
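
The request above expects a list of pre-extracted frame images. A minimal sketch for uniformly sampling frames from a video with OpenCV follows; the helper name, output paths, and frame count are illustrative assumptions, not part of the model's API.

```python
# Uniformly sample frames from a video and save them as JPEGs for the request above.
import cv2

def sample_frames(video_path: str, num_frames: int = 12, out_prefix: str = "frame") -> list[str]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    paths = []
    for i in range(num_frames):
        # Jump to a uniformly spaced frame index and decode it.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_frames))
        ok, frame = cap.read()
        if not ok:
            break
        path = f"{out_prefix}{i + 1}.jpg"
        cv2.imwrite(path, frame)
        paths.append(path)
    cap.release()
    return paths

frame_paths = sample_frames("path/to/video.mp4", num_frames=12)
```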
|
|
|
|
|
## Output Format

VC-Inspector outputs two components (see the parsing sketch after this list):

1. **Quality Score** (Line 1): Integer from 1-5
   - 5: Caption is accurate and comprehensive
   - 4: Minor factual errors
   - 3: Moderate factual errors
   - 2: Significant factual errors
   - 1: Major factual errors or completely incorrect

2. **Explanation** (Line 2+): Natural language explanation identifying:
   - Incorrect objects (e.g., "guitar" instead of "violin")
   - Incorrect actions (e.g., "running" instead of "walking")
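
A minimal sketch for splitting the raw model output into these two components, assuming the model follows the format above (the helper name is illustrative):

```python
# Split VC-Inspector's raw output into (score, explanation).
def parse_vc_inspector_output(output_text: str) -> tuple[int, str]:
    lines = output_text.strip().split("\n", 1)
    score = int(lines[0].strip())                              # first line: integer score 1-5
    explanation = lines[1].strip() if len(lines) > 1 else ""   # remaining lines: explanation
    return score, explanation

score, explanation = parse_vc_inspector_output(output_text[0])
print(score, explanation)
```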
|
|
|
|
|
## Training Details

| Hyperparameter | Value |
|:---------------|:------|
| Base Model | Qwen2.5-VL-3B-Instruct |
| Training Data | ActivityNet-FG-It (44K samples) |
| Epochs | 1 |
| Global Batch Size | 128 |
| Learning Rate | 1e-4 |
| LR Scheduler | Cosine (min: 1e-5) |
| LoRA Rank | 32 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| Number of Frames | 32 |
| Training Time | ~32 GPU hours (A100) |
|
|
|
|
|
## Ablation Studies

### Impact of Explanation Supervision

| Setting | Kendall's τ_b | Spearman's ρ |
|:--------|:-------------:|:------------:|
| Without Explanations | 34.29 | 38.18 |
| **With Explanations** | **37.99** | **42.45** |

### Data Synthesis Strategy

| Strategy | Kendall's τ_b | Spearman's ρ |
|:---------|:-------------:|:------------:|
| Change objects only | 36.40 | 41.20 |
| Change actions only | 33.23 | 39.63 |
| **Change both (Ours)** | **37.99** | **42.45** |
|
|
|
|
|
## When to Use VC-Inspector-3B vs 7B

| Use Case | Recommended Model |
|:---------|:------------------|
| Resource-constrained environments | **3B** |
| On-device deployment | **3B** |
| Batch processing large datasets | **3B** |
| Maximum accuracy required | 7B |
| Research benchmarking | 7B |
|
|
|
|
|
## Limitations

- Primarily targets object and action correctness; attributes, spatial relationships, and fine-grained temporal ordering are not explicitly modeled
- Training relies on synthetically generated captions and pseudo-scores
- Slightly lower performance than the 7B variant on challenging cases
|
|
|
|
|
## Citation

If you find this work useful, please cite our paper:

```bibtex
@misc{dipta2025advancingreferencefreeevaluationvideo,
      title={Advancing Reference-free Evaluation of Video Captions with Factual Analysis},
      author={Shubhashis Roy Dipta and Tz-Ying Wu and Subarna Tripathi},
      year={2025},
      eprint={2509.16538},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.16538},
}
```
|
|
|
|
|
## Acknowledgements

This work builds upon [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and uses [ms-swift](https://github.com/modelscope/ms-swift) for training.
|
|
|