---
license: mit
pipeline_tag: image-to-text
library_name: transformers
tags:
- multimodal
- captioning
- controllable
---
# AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning

🤗 Model Weights | AnyCapEval Benchmark | Paper | Code

## Overview
AnyCap is a unified and controllable omni-modal captioning framework, supporting caption generation for images, audio, and videos with fine-grained style control. It was presented in the paper "AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning".
At its core, AnyCapModel (ACM) is a lightweight plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. The project also introduces AnyCapDataset (ACD) to address data scarcity and AnyCapEval for reliable evaluation by decoupling content accuracy and stylistic fidelity.
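The plug-and-play flow described above can be pictured as a two-stage pipeline: a frozen base model produces a first caption, and ACM refines it conditioned on the user instruction and modality features. The sketch below is illustrative only; every function name and the feature placeholder are hypothetical stand-ins, not the project's actual API:

```python
def base_caption(modality_input: str) -> str:
    # Stand-in for a frozen foundation captioner (e.g. GPT-4o or InternVL).
    return "a dog running on grass"

def acm_refine(base_out: str, instruction: str, modality_features: str) -> str:
    # Stand-in for AnyCapModel: it conditions on the base model's caption,
    # the user's control instruction, and encoder features of the input.
    return f"[{instruction}] {base_out} ({modality_features})"

def controlled_caption(modality_input: str, instruction: str) -> str:
    feats = "image-features"  # placeholder for real modality encoder features
    return acm_refine(base_caption(modality_input), instruction, feats)
```

The key property is that `base_caption` is never retrained; only the lightweight refinement stage sees the user instruction.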
## Highlights

- Unified Multi-modal Captioning: One framework covers image, audio, and video captioning with controllable styles.
- Customizable Caption Styles: Control caption styles through predefined instructions, without retraining the base model.
- Open Benchmark & Evaluation: AnyCapEval, an industry-level, multi-modal benchmark with comprehensive evaluation protocols.
- End-to-End Open Source: Full training pipeline, evaluation toolkits, dataset pipeline, and open benchmark.
## Quick Start

### Installation
To get started with AnyCap, clone the repository and install the dependencies:
```bash
git clone https://github.com/qishisuren123/AnyCap.git
cd AnyCap
pip install -r requirements.txt
```
You may also need to install Fairseq manually:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```
### Sample Usage with 🤗 Transformers

This model can be loaded and used with the `transformers` library, for example, for image captioning:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests

# Load model and processor
model_id = "qishisuren/AnyCapModel"  # Replace with a specific checkpoint if needed
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Example image input (replace with your image path or a PIL Image object)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/image_captioning.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Prepare messages for captioning
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Apply chat template and process inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)

# Generate caption
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
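Style control works through the instruction text itself: swapping the prompt changes the caption's style. A minimal sketch of composing style-constrained requests, assuming the same chat-message format as above (the specific instruction phrasings are illustrative, not prescribed by the model):

```python
def build_caption_messages(image, instruction):
    """Compose a chat message pairing an image with a style-controlling instruction."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": instruction},
            ],
        }
    ]

# Illustrative style controls; any free-form instruction works the same way.
brief = build_caption_messages("image.png", "Describe this image in one short sentence.")
poetic = build_caption_messages("image.png", "Write a poetic caption for this image.")
```

Either message list can then be passed to `processor.apply_chat_template` exactly as in the snippet above.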
For more detailed usage, including evaluation and other modalities, please refer to the official GitHub repository.
## Benchmark & Evaluation

### AnyCapEval Benchmark
*Figure 2: Evaluation methodology of AnyCapEval. (a) Examples demonstrating content scoring with Key-point Density (KPD) and style scoring rules. (b) KPD correlation analysis, showing that KPD length-based metrics achieve the highest Pearson/Spearman/Kendall correlations with human judgments. (c) Radar chart illustrating the large performance gains delivered by ACM integration across ten dimensions (IApt-Thm).*
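KPD rewards captions that pack many correct key points into few words. As an illustration only (the paper's exact scoring protocol may differ, and real key-point matching is done by a judge model rather than substring search), a naive key-point density could be computed as matched key points per 100 caption words:

```python
def keypoint_density(caption: str, keypoints: list) -> float:
    """Matched key points per 100 caption words (illustrative proxy for KPD)."""
    text = caption.lower()
    hits = sum(1 for kp in keypoints if kp.lower() in text)
    return 100.0 * hits / max(len(text.split()), 1)

# A 7-word caption that covers 2 of 3 key points.
score = keypoint_density("A red car parked near a tree.", ["red car", "tree", "dog"])
```

Under this proxy, a longer caption that adds words without adding correct key points scores lower, which is the behavior the metric is designed to capture.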
| | GPT-4o | GPT-4o + ACM | InternVL2.5-8B | InternVL2.5-8B + ACM |
|---|---|---|---|---|
| Average ↑ | 2.79 | 4.15 | 2.75 | 3.98 |
Key takeaway: ACM boosts GPT-4o's content scores by +45% and style scores by +12%, and yields similar gains on strong open models, highlighting the reliability and coverage of AnyCapEval.
For detailed evaluation scripts and instructions, please refer to the GitHub repository.
## Dataset

### AnyCapDataset (Coming Soon)
High-quality, fully annotated datasets for all three modalities (image, audio, video) will be released soon on the Hugging Face Hub. Stay tuned!
## Contributing
We welcome contributions! Please open issues or submit PRs for feedback and improvements.
## Citation
```bibtex
@misc{ren2025anycapprojectunifiedframework,
      title={AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning},
      author={Yiming Ren and Zhiqiang Lin and Yu Li and Gao Meng and Weiyun Wang and Junjie Wang and Zicheng Lin and Jifeng Dai and Yujiu Yang and Wenhai Wang and Ruihang Chu},
      year={2025},
      eprint={2507.12841},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.12841},
}
```
## License

This project is licensed under the MIT License; see the LICENSE file for details.