---
license: mit
pipeline_tag: image-to-text
library_name: transformers
tags:
- multimodal
- captioning
- controllable
---

# AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning

🤗 Model Weights  |  📊 AnyCapEval Benchmark  |  📝 Paper  |  📚 Code

---

## Overview

**AnyCap** is a unified and controllable omni-modal captioning framework, supporting caption generation for images, audio, and videos with fine-grained style control. It was presented in the paper "[AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning](https://huggingface.co/papers/2507.12841)".

At its core, **AnyCapModel (ACM)** is a lightweight, plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. The project also introduces **AnyCapDataset (ACD)** to address data scarcity and **AnyCapEval** for reliable evaluation by decoupling content accuracy from stylistic fidelity.

---

## 🚩 Highlights

- 🏆 **Unified Multi-modal Captioning:** One framework covers image, audio, and video captioning with controllable styles.
- 📝 **Customizable Caption Styles:** Control caption styles through predefined instructions and models.
- 📊 **Open Benchmark & Evaluation:** AnyCapEval, an industry-level multi-modal benchmark with comprehensive evaluation protocols.
- 🛠️ **End-to-End Open Source:** Full training pipeline, evaluation toolkits, dataset pipeline, and open benchmark.
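Conceptually, the plug-and-play refinement described above takes a frozen base model's caption plus a user instruction and modality features, and produces an improved caption. Below is a minimal, purely illustrative sketch of that flow; the function names and stub logic are hypothetical stand-ins, not the actual AnyCap API (see the repository for the real interface):

```python
# Hypothetical sketch of the ACM refinement flow. The base captioner is
# frozen; ACM conditions on its draft caption, the user instruction, and
# modality features to produce a refined, instruction-following caption.

def base_model_caption(modality_input: dict) -> str:
    # Stand-in for any frozen base captioner (e.g. GPT-4o, InternVL2.5).
    return "a dog running on grass"

def acm_refine(base_caption: str, instruction: str, modality_features: dict) -> str:
    # Stand-in for AnyCapModel: in the real system this is a learned model;
    # here we only illustrate the inputs it consumes.
    return f"[{instruction}] {base_caption}"

features = {"modality": "image"}
draft = base_model_caption(features)
refined = acm_refine(draft, "brief style", features)
print(refined)  # → [brief style] a dog running on grass
```

The key design point is that the base model is never retrained: only the lightweight refinement stage sees the user's style instruction.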
---

## 🚀 Quick Start

### Installation

To get started with AnyCap, clone the repository and install the dependencies:

```bash
git clone https://github.com/qishisuren123/AnyCap.git
cd AnyCap
pip install -r requirements.txt
```

You may also need to install Fairseq manually:

```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```

### Sample Usage with 🤗 Transformers

This model can be loaded and used with the `transformers` library, for example, for image captioning:

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image
import requests

# Load model and processor
model_id = "qishisuren/AnyCapModel"  # Replace with a specific checkpoint if needed
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Example image input (replace with your image path or a PIL Image object)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/image_captioning.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Prepare messages for captioning
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Apply chat template and process inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, images=image, return_tensors="pt").to(model.device)

# Generate caption
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```

For more detailed usage, including evaluation and other modalities, please refer to the [official GitHub repository](https://github.com/qishisuren123/AnyCap).

---

## 📊 Benchmark & Evaluation

### AnyCapEval Benchmark
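AnyCapEval's content scoring is built around *Key-point Density* (KPD): roughly, how many reference key points a caption covers, normalized by caption length. The sketch below is a hypothetical, simplified illustration of a KPD-style metric (substring matching, per-100-words normalization); the benchmark's actual matching and scoring protocol is defined in the paper and evaluation toolkit:

```python
# Hypothetical KPD-style metric: count reference key points covered by a
# caption and normalize by caption length (here: per 100 words). The real
# AnyCapEval protocol uses a more sophisticated matching procedure.

def kpd(caption: str, key_points: list[str]) -> float:
    words = caption.lower().split()
    if not words:
        return 0.0
    text = caption.lower()
    covered = sum(1 for kp in key_points if kp.lower() in text)
    return 100.0 * covered / len(words)

caption = "A brown dog chases a red ball across the park lawn."
key_points = ["brown dog", "red ball", "park"]
print(round(kpd(caption, key_points), 2))  # → 27.27
```

Normalizing by length rewards captions that are informative without padding, which is why length-based KPD variants correlate well with human judgments (see Figure 2b below).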

**Figure 2 – Evaluation methodology of AnyCapEval.** (a) Examples demonstrating **content** scoring with *Key-point Density* (KPD) and **style** scoring rules. (b) KPD correlation analysis, showing that KPD length-based metrics achieve the highest Pearson/Spearman/Kendall correlations with human judgments. (c) Radar chart illustrating the large performance gains delivered by **ACM** integration across ten dimensions (IApt–Thm).

| | GPT-4o | **GPT-4o + ACM** | InternVL2.5-8B | **InternVL2.5-8B + ACM** |
|---|:---:|:---:|:---:|:---:|
| **Average ↑** | 2.79 | **4.15** | 2.75 | **3.98** |

> **Key takeaway:** ACM boosts GPT-4o's content scores by **+45%** and style scores by **+12%**, and yields similar gains on strong open models, highlighting the reliability and coverage of AnyCapEval.

For detailed evaluation scripts and instructions, please refer to the [GitHub repository](https://github.com/qishisuren123/AnyCap).

---

## 📂 Dataset

### AnyCapDataset (Coming Soon)

High-quality, fully annotated datasets for all three modalities (image, audio, video) will be released soon on HuggingFace. Stay tuned!

---

## 🤝 Contributing

We welcome contributions! Please open issues or submit PRs for feedback and improvements.

---

## 📝 Citation

```bibtex
@misc{ren2025anycapprojectunifiedframework,
      title={AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning},
      author={Yiming Ren and Zhiqiang Lin and Yu Li and Gao Meng and Weiyun Wang and Junjie Wang and Zicheng Lin and Jifeng Dai and Yujiu Yang and Wenhai Wang and Ruihang Chu},
      year={2025},
      eprint={2507.12841},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.12841},
}
```

---

## License

This project is licensed under the MIT License; see the [LICENSE](https://github.com/qishisuren123/AnyCap/blob/main/LICENSE) file for details.