Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation

Project page

3.5B Model

The 3.5B model uses Google's Gemma-2 as its decoder and Microsoft's Florence-2-large as its encoder, and is trained on a synthetic dataset. As a pre-trained model, its coverage of tasks and languages is currently limited: it supports image captioning in English and German, and multimodal machine translation from English into German, French, Spanish, Russian, and Chinese.

Getting Started

import requests
from PIL import Image
import torch
from transformers import AutoModelForCausalLM, AutoConfig, AutoProcessor, AutoTokenizer


# Run on the GPU in float16 when available; otherwise fall back to CPU in float32.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained("Spravil/caption-via-translation-3_5B", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained(
    "google/gemma-2-2b",
    add_bos_token=True,
    add_eos_token=True,
    padding_side="right",
    truncation_side="right",
)
processor = AutoProcessor.from_pretrained("Spravil/caption-via-translation-3_5B", trust_remote_code=True, new_tokenizer=tokenizer, use_encoder_tokenizer=True)
# The prompt combines a target-language token with a Florence-2-style task token.
task = "<MORE_DETAILED_CAPTION>"
lang = "de"
prompt = f"<LANG_{lang.upper()}>{task}"

# Download an example image and prepare the model inputs.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(prompt, images=image, return_tensors="pt").to(device, torch_dtype)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=4,
    do_sample=False,
    use_cache=False,
)

# Keep special tokens so post_process_generation can strip the task prompt.
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task, image_size=(image.width, image.height))
print(parsed_answer)
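As the snippet above shows, the prompt is simply a language token (e.g. <LANG_DE>) followed by a task token. A small helper (hypothetical, not part of the released code) makes it easy to switch the caption language between the two supported options, English and German, while reusing the rest of the pipeline unchanged:

```python
# Hypothetical helper: builds the "<LANG_XX><TASK>" prompt used above.
# The set of captioning languages follows the model description (en, de).
SUPPORTED_CAPTION_LANGS = {"en", "de"}

def build_prompt(lang: str, task: str = "<MORE_DETAILED_CAPTION>") -> str:
    """Return the prompt string expected by the processor."""
    if lang.lower() not in SUPPORTED_CAPTION_LANGS:
        raise ValueError(f"Unsupported captioning language: {lang!r}")
    return f"<LANG_{lang.upper()}>{task}"

# English caption prompt for the same generate/decode pipeline:
print(build_prompt("en"))  # → <LANG_EN><MORE_DETAILED_CAPTION>
```

To caption in English instead of German, pass `build_prompt("en")` (instead of the hard-coded `prompt`) to the processor; everything after that stays the same.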

BibTeX

@inproceedings{spravil2026scaling,
  title={Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation},
  author={Spravil, Julian and Houben, Sebastian and Behnke, Sven},
  booktitle={Proceedings of the 40th AAAI Conference on Artificial Intelligence},
  year={2026}
}