---
license: mit
pipeline_tag: image-to-text
library_name: transformers
tags:
  - multimodal
  - captioning
  - controllable
---

# AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning

πŸ€— Model Weights  |  πŸ“Š AnyCapEval Benchmark  |  πŸ“ Paper  |  πŸ“š Code


## Overview

AnyCap is a unified and controllable omni-modal captioning framework, supporting caption generation for images, audio, and videos with fine-grained style control. It was presented in the paper "AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning".

At its core, AnyCapModel (ACM) is a lightweight plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. The project also introduces AnyCapDataset (ACD) to address data scarcity and AnyCapEval for reliable evaluation by decoupling content accuracy and stylistic fidelity.
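Conceptually, ACM treats the base model's output as a draft to be refined rather than a final answer. A minimal sketch of this plug-and-play flow (the `build_acm_prompt` function and prompt wording are illustrative, not the actual AnyCap API):

```python
# Illustrative sketch of ACM's refinement idea (names and prompt wording are
# hypothetical, not the AnyCap API): the base model's original caption is
# combined with the user's style instruction, and ACM generates an improved
# caption conditioned on both, plus the modality features.
def build_acm_prompt(base_caption: str, instruction: str) -> str:
    """Assemble the refinement context ACM would condition on."""
    return (
        f"Original caption: {base_caption}\n"
        f"Instruction: {instruction}\n"
        "Refined caption:"
    )


prompt = build_acm_prompt(
    "A dog running on a beach.",
    "Describe the scene in one poetic sentence.",
)
print(prompt)
```

Because the base model is never retrained, the same refinement step can sit behind any captioner, proprietary or open.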


## 🚩 Highlights

  • πŸ† Unified Multi-modal Captioning: One framework covers image, audio, and video captioning with controllable styles.
  • πŸ“ Customizable Caption Styles: Control caption styles through predefined instructions and models.
  • πŸ“Š Open Benchmark & Evaluation: AnyCapEvalβ€”an industry-level, multi-modal benchmark with comprehensive evaluation protocols.
  • πŸ› οΈ End-to-End Open Source: Full training pipeline, evaluation toolkits, dataset pipeline and open benchmark.

πŸš€ Quick Start

Installation

To get started with AnyCap, clone the repository and install the dependencies:

```bash
git clone https://github.com/qishisuren123/AnyCap.git
cd AnyCap
pip install -r requirements.txt
```

You may also need to install Fairseq manually:

```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```

### Sample Usage with 🤗 Transformers

This model can be loaded and used with the `transformers` library, for example for image captioning:

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests

# Load model and processor
model_id = "qishisuren/AnyCapModel"  # Replace with a specific checkpoint if needed
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Example image input (replace with your image path or a PIL Image object)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/image_captioning.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Prepare messages for captioning
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Apply chat template and process inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)

# Generate caption
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(generated_text)
```
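Style control enters through the text instruction in the messages. A minimal sketch (the `make_messages` helper and instruction wordings are illustrative, not part of the AnyCap API):

```python
# Hypothetical helper (not part of the AnyCap API): swapping the text
# instruction changes the caption style without touching the model.
def make_messages(image, instruction):
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": instruction},
            ],
        }
    ]


# Same image, different style instructions (wordings are illustrative):
brief = make_messages("image.png", "Describe this image in one short sentence.")
detailed = make_messages("image.png", "Describe this image in detail, including background objects.")
```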

For more detailed usage, including evaluation and other modalities, please refer to the official GitHub repository.


πŸ“Š Benchmark & Evaluation

AnyCapEval Benchmark

*Figure 2 – Evaluation methodology of AnyCapEval. (a) Examples demonstrating content scoring with Key-point Density (KPD) and style scoring rules. (b) KPD correlation analysis, showing that KPD achieves higher Pearson/Spearman/Kendall correlations with human judgments than length-based metrics. (c) Radar chart illustrating the performance gains delivered by ACM integration across ten dimensions (IApt–Thm).*

|           | GPT-4o | GPT-4o + ACM | InternVL2.5-8B | InternVL2.5-8B + ACM |
|-----------|--------|--------------|----------------|----------------------|
| Average ↑ | 2.79   | 4.15         | 2.75           | 3.98                 |

**Key takeaway:** ACM boosts GPT-4o's content scores by +45% and style scores by +12%, with similar gains on strong open-source models, highlighting the reliability and coverage of AnyCapEval.
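As a sanity check, the relative improvements implied by the averages in the table can be computed directly (the +45%/+12% figures refer to the separate content and style scores, not these averages):

```python
# Relative improvement implied by the AnyCapEval averages in the table above.
scores = {
    "GPT-4o": (2.79, 4.15),          # (base, base + ACM)
    "InternVL2.5-8B": (2.75, 3.98),
}
for model, (base, with_acm) in scores.items():
    gain = (with_acm - base) / base * 100
    print(f"{model}: +{gain:.1f}%")  # GPT-4o: +48.7%, InternVL2.5-8B: +44.7%
```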

For detailed evaluation scripts and instructions, please refer to the GitHub repository.


πŸ“‚ Dataset

AnyCapDataset (Coming Soon)

High-quality, fully annotated datasets for all three modalities (image, audio, video) will be released soon on Hugging Face. Stay tuned!


## 🤝 Contributing

We welcome contributions! Please open issues or submit PRs for feedback and improvements.


πŸ“ Citation

```bibtex
@misc{ren2025anycapprojectunifiedframework,
      title={AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning},
      author={Yiming Ren and Zhiqiang Lin and Yu Li and Gao Meng and Weiyun Wang and Junjie Wang and Zicheng Lin and Jifeng Dai and Yujiu Yang and Wenhai Wang and Ruihang Chu},
      year={2025},
      eprint={2507.12841},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.12841},
}
```

## License

This project is licensed under the MIT License – see the LICENSE file for details.