---
language:
  - en
  - de
  - fr
  - es
  - ru
  - zh
base_model:
  - microsoft/Florence-2-large
pipeline_tag: image-text-to-text
library_name: transformers
tags:
  - Image-to-Text
  - Image-Text-to-Text
  - Translation
datasets:
  - Spravil/cc12m_ccmatrix_captions_and_translations
---

# Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation

Project page

## 1.0B Model

The 1.0B model is built on Microsoft's Florence-2-large and trained on a synthetic dataset. As a pre-trained version, its task and language coverage is currently limited: it supports image captioning in English and German, and multimodal machine translation from English into German, French, Spanish, Russian, and Chinese.

## Getting Started

```python
import requests
from PIL import Image
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the model; the processor wraps the Gemma-2 tokenizer for the text encoder.
model = AutoModelForCausalLM.from_pretrained(
    "Spravil/caption-via-translation-1_0B",
    torch_dtype=torch_dtype,
    trust_remote_code=True,
).to(device)
tokenizer = AutoTokenizer.from_pretrained(
    "google/gemma-2-2b",
    add_bos_token=True,
    add_eos_token=True,
    padding_side="right",
    truncation_side="right",
)
processor = AutoProcessor.from_pretrained(
    "Spravil/caption-via-translation-1_0B",
    trust_remote_code=True,
    new_tokenizer=tokenizer,
    use_encoder_tokenizer=True,
)

# The prompt combines a target-language token with a task token.
task = "<MORE_DETAILED_CAPTION>"
lang = "de"
prompt = f"<LANG_{lang.upper()}>{task}"

# Fetch an example image.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(prompt, images=image, return_tensors="pt").to(device, torch_dtype)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=4,
    do_sample=False,
    use_cache=False,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task, image_size=(image.width, image.height))
print(parsed_answer)
```
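Prompts follow the `<LANG_XX><TASK>` pattern shown above. As a minimal sketch, a small helper can build such prompts and guard against unsupported caption languages; the helper name and the language guard are illustrative assumptions, not part of the released processor:

```python
# Illustrative helper -- not part of the model's API.
# Per the model card, captioning is limited to English and German.
CAPTION_LANGS = {"en", "de"}

def build_caption_prompt(task: str, lang: str) -> str:
    """Build a prompt in the <LANG_XX><TASK> format used by the model."""
    if lang not in CAPTION_LANGS:
        raise ValueError(f"captioning is not supported for language '{lang}'")
    return f"<LANG_{lang.upper()}>{task}"

print(build_caption_prompt("<MORE_DETAILED_CAPTION>", "de"))
# -> <LANG_DE><MORE_DETAILED_CAPTION>
```

The same `<LANG_XX>` prefix selects the target language for the translation tasks, which cover a wider set of languages than captioning.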

## BibTeX

```bibtex
@inproceedings{spravil2026scaling,
  title={Scaling Laws for Conditional Emergence of Multilingual Image Captioning via Generalization from Translation},
  author={Spravil, Julian and Houben, Sebastian and Behnke, Sven},
  booktitle={Proceedings of the 40th AAAI Conference on Artificial Intelligence},
  year={2026}
}
```