Image-Text-to-Text · Transformers · Safetensors · English · Helium1_VL_2B · custom_code
ameroyer and nielsr (HF Staff) committed
Commit 943bec4 · verified · 1 parent: 1126ea7

Add library_name, paper link, and sample usage (#1)


- Add library_name, paper link, and sample usage (47914ca3385b190985b65860221cd365cef9d430)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
README.md +81 -7
README.md CHANGED
@@ -1,15 +1,89 @@
  ---
- language:
- - en
  base_model:
  - kyutai/helium-1-2b
- pipeline_tag: image-text-to-text
- license: cc-by-nc-sa-4.0
  datasets:
  - HuggingFaceM4/FineVision
  - mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
  ---
- Please refer to the [main model card](https://huggingface.co/kyutai/CASA-Helium1-VL-2B) for more information and instructions to run.

- This model page contains model weights for `Helium1-VL-2B`, a Helium1-2B model which is instruct-tuned and further trained to handle visual inputs using a pretrained encoder from Qwen-2.5VL.
- This model is released as part of our CASA model release. We provide model weights for CASA models in the associated model collection.
  ---
  base_model:
  - kyutai/helium-1-2b
  datasets:
  - HuggingFaceM4/FineVision
  - mvp-lab/LLaVA-OneVision-1.5-Instruct-Data
+ language:
+ - en
+ license: cc-by-nc-sa-4.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
  ---
 
+ # Helium1-VL-2B
+
+ `Helium1-VL-2B` is an instruct-tuned vision-language model (VLM) based on the [Helium1-2B](https://huggingface.co/kyutai/helium-1-2b) text-only language model and a pretrained vision encoder from [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
+
+ This model is released as part of the **CASA** project. While the CASA architecture focuses on cross-attention fusion, `Helium1-VL-2B` serves as a high-performance **token insertion** baseline, achieving state-of-the-art results among models of comparable size trained on publicly available datasets.
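+
+ As a schematic illustration of the token insertion paradigm (a minimal sketch of the general idea, not this model's actual implementation), projected image patch embeddings are spliced directly into the text embedding sequence, and the language model then runs ordinary self-attention over the longer sequence:
+
+ ```python
+ import torch
+
+ # Illustrative shapes only: (batch, tokens, hidden).
+ text_embeds = torch.randn(1, 12, 2048)    # embedded text prompt
+ image_embeds = torch.randn(1, 256, 2048)  # projected image patches
+
+ insert_at = 4  # hypothetical position of the image placeholder in the prompt
+ fused = torch.cat(
+     [text_embeds[:, :insert_at], image_embeds, text_embeds[:, insert_at:]],
+     dim=1,
+ )
+ # The LM now attends over text and image tokens jointly.
+ ```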
+
+ - **Paper:** [CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion](https://huggingface.co/papers/2512.19535)
+ - **Project Page:** [https://kyutai.org/casa](https://kyutai.org/casa)
+ - **GitHub Repository:** [https://github.com/kyutai-labs/casa](https://github.com/kyutai-labs/casa)
+
+ ## Sample Usage
+
+ You can run inference using the following code snippet. This model requires `trust_remote_code=True` to load the custom architecture.
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoProcessor
+
+ model_id = "kyutai/Helium1-VL-2B"
+
+ # Load the custom architecture shipped with the repository.
+ model = AutoModel.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     attn_implementation="flash_attention_2",
+     trust_remote_code=True,
+ ).cuda()
+ processor = AutoProcessor.from_pretrained(
+     model_id,
+     trust_remote_code=True,
+ )
+
+ conversation = [
+     {
+         "role": "user",
+         "content": [
+             {
+                 "type": "image",
+                 "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.png",
+             },
+             {
+                 "type": "text",
+                 "text": "Describe this image.",
+             },
+         ],
+     },
+ ]
+
+ # Tokenize the conversation and move the tensors to the model's device.
+ inputs = processor.tokenize_messages(messages=conversation)
+ inputs = inputs.to(model.device)
+ input_len = inputs["input_ids"].shape[1]
+
+ output_ids = model.generate_from_image(
+     **inputs,
+     max_new_tokens=512,
+     pre_image_tokens=processor.pre_image_tokens,
+     post_image_tokens=processor.post_image_tokens,
+     eos_token_id=model.generation_config.eos_token_id,
+ )[0, input_len:]  # keep only the newly generated tokens
+
+ response = processor.tokenizer.decode(output_ids, skip_special_tokens=True)
+ print(response)
+ ```
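+
+ If `flash-attn` is not installed, you may be able to fall back to PyTorch's built-in attention backend instead (an assumption; the repository's custom code may require flash-attention):
+
+ ```python
+ # Assumption: the custom architecture also supports the standard "sdpa" backend.
+ model = AutoModel.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     attn_implementation="sdpa",  # instead of "flash_attention_2"
+     trust_remote_code=True,
+ ).cuda()
+ ```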
+
+ ## Citation
+
+ If you use this model or the CASA fusion paradigm in your research, please cite:
+
+ ```bibtex
+ @article{kyutai2025casa,
+     author  = {Moritz B\"ohle and Am\'elie Royer and Juliette Marrie and Edouard Grave and Patrick P\'erez},
+     year    = {2025},
+     title   = {CASA: Cross-Attention via Self-Attention for Efficient Vision-Language Fusion},
+     journal = {ArXiv},
+     url     = {https://arxiv.org/abs/2512.19535}
+ }
+ ```