Caption via Translation Collection
Models and datasets of the paper "Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation"
The fine-tuned variant of the pre-trained 3.5B model is trained on a curated mix of downstream tasks to broaden coverage and improve benchmark performance. Concretely, it is fine-tuned on multimodal translation, ambiguity disambiguation, and captioning across multiple caption styles, with missing languages filled in via machine-translation augmentation. The downstream datasets are Multi30K, CoMMuTE, COCO Karpathy, XM3600, Image Paragraph, and DOCCI. The model supports English, German, French, Spanish, Russian, and Chinese.
```python
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the fine-tuned model (custom model code lives in the repo, hence trust_remote_code).
model = AutoModelForCausalLM.from_pretrained(
    "Spravil/caption-via-translation-3_5B-ft",
    torch_dtype=torch_dtype,
    trust_remote_code=True,
).to(device)

# The processor wraps the Gemma-2 tokenizer for the language-model side.
tokenizer = AutoTokenizer.from_pretrained(
    "google/gemma-2-2b",
    add_bos_token=True,
    add_eos_token=True,
    padding_side="right",
    truncation_side="right",
)
processor = AutoProcessor.from_pretrained(
    "Spravil/caption-via-translation-3_5B-ft",
    trust_remote_code=True,
    new_tokenizer=tokenizer,
    use_encoder_tokenizer=True,
)

# The prompt is a language tag followed by a task token,
# here: a detailed caption in German.
task = "<MORE_DETAILED_CAPTION>"
lang = "de"
prompt = f"<LANG_{lang.upper()}>{task}"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(prompt, images=image, return_tensors="pt").to(device, torch_dtype)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=4,
    do_sample=False,
    use_cache=False,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(
    generated_text, task=task, image_size=(image.width, image.height)
)
print(parsed_answer)
```
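The language tag at the front of the prompt selects the output language. As a minimal sketch of how prompts for the other supported languages can be built the same way (the `build_prompt` helper is hypothetical, not part of the released API, and it assumes the `<LANG_XX>` tag scheme from the example above applies uniformly to all six languages):

```python
# Hypothetical helper: build a prompt for any supported language.
# Assumption: the <LANG_XX> prefix works for every language listed on the card;
# only <MORE_DETAILED_CAPTION> is shown as a task token in the example above.
SUPPORTED_LANGS = ["en", "de", "fr", "es", "ru", "zh"]

def build_prompt(lang: str, task: str = "<MORE_DETAILED_CAPTION>") -> str:
    if lang not in SUPPORTED_LANGS:
        raise ValueError(f"unsupported language: {lang!r}")
    return f"<LANG_{lang.upper()}>{task}"
```

To caption the same image in every supported language, loop over `SUPPORTED_LANGS` and pass `build_prompt(lang)` to the processor in place of the fixed `prompt` above.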
```bibtex
@inproceedings{spravil2026scaling,
  title={Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation},
  author={Spravil, Julian and Houben, Sebastian and Behnke, Sven},
  booktitle={Proceedings of the 40th AAAI Conference on Artificial Intelligence},
  year={2026}
}
```
Base model: google/gemma-2-2b