Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation

Project page

11.2B Model (Fine-tuned)

The fine-tuned variant of the pre-trained 11.2B model is supervised on a curated mix of downstream tasks to broaden task coverage and improve benchmark performance. Concretely, it is fine-tuned on multimodal translation, disambiguation of ambiguous captions, and captioning across multiple caption styles, with missing languages filled in via machine-translation (MT) augmentation. The downstream datasets are Multi30K, CoMMuTE, COCO (Karpathy split), XM3600, Image Paragraph, and DOCCI. The model supports English, German, French, Spanish, Russian, and Chinese.
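Prompts for this model pair a language token with a task token, as in the Getting Started example below. A minimal sketch of that convention — the helper name and the full set of task tokens are illustrative assumptions; only `<MORE_DETAILED_CAPTION>` and German actually appear in the example:

```python
# Languages listed on this model card, usable as <LANG_XX> prefix tokens.
SUPPORTED_LANGS = {"en", "de", "fr", "es", "ru", "zh"}

def build_prompt(task: str, lang: str) -> str:
    """Compose a prompt such as '<LANG_DE><MORE_DETAILED_CAPTION>'.

    `task` is a task token (e.g. '<MORE_DETAILED_CAPTION>' from the example
    below); `lang` is a two-letter code from SUPPORTED_LANGS.
    """
    lang = lang.lower()
    if lang not in SUPPORTED_LANGS:
        raise ValueError(f"Unsupported language: {lang!r}")
    return f"<LANG_{lang.upper()}>{task}"

print(build_prompt("<MORE_DETAILED_CAPTION>", "de"))
# → <LANG_DE><MORE_DETAILED_CAPTION>
```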

Getting Started

import requests
from PIL import Image
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer


# Run in fp16 on GPU, fp32 on CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained("Spravil/caption-via-translation-11_2B-ft", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
# The processor reuses the Gemma tokenizer for the text encoder.
tokenizer = AutoTokenizer.from_pretrained(
    "google/gemma-2-2b",
    add_bos_token=True,
    add_eos_token=True,
    padding_side="right",
    truncation_side="right",
)
processor = AutoProcessor.from_pretrained("Spravil/caption-via-translation-11_2B-ft", trust_remote_code=True, new_tokenizer=tokenizer, use_encoder_tokenizer=True)
# The prompt is a language token followed by a task token.
task = "<MORE_DETAILED_CAPTION>"
lang = "de"  # one of: en, de, fr, es, ru, zh
prompt = f"<LANG_{lang.upper()}>{task}"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(prompt, images=image, return_tensors="pt").to(device, torch_dtype)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=4,
    do_sample=False,
    use_cache=False,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task, image_size=(image.width, image.height))
print(parsed_answer)

Bibtex

@inproceedings{spravil2026scaling,
  title={Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation},
  author={Spravil, Julian and Houben, Sebastian and Behnke, Sven},
  booktitle={Proceedings of the 40th AAAI Conference on Artificial Intelligence},
  year={2026}
}
Base model

google/gemma-2-9b