---
license: apache-2.0
datasets:
- allenai/Molmo2-VideoPoint
language:
- en
base_model:
- google/siglip-so400m-patch14-384
- Qwen/Qwen3-4B-Instruct-2507
pipeline_tag: video-text-to-text
library_name: transformers
tags:
- multimodal
- olmo
- molmo
- molmo2
---

<img src="molmo_2_logo_RGB.png" alt="Logo for the Molmo2 Project" style="width: auto; height: 50px;">

# Molmo2-4B

Molmo2 is a family of open vision-language models developed by the Allen Institute for AI (Ai2) that support image, video, and multi-image understanding and grounding.
Molmo2 models are trained on publicly available third-party datasets as referenced in [our technical report](https://allenai.org/papers/molmo2) and on [Molmo2 data](https://huggingface.co/collections/allenai/molmo2-data),
a collection of datasets with highly curated image-text and video-text pairs.
Molmo2 achieves state-of-the-art performance among multimodal models of similar size.
You can find all models in the Molmo2 family [here](https://huggingface.co/collections/allenai/molmo2).

**Learn more** about the Molmo2 family [in our announcement blog post](https://allenai.org/blog/molmo2).

Molmo2-VideoPoint-4B is based on [Qwen3-4B-Instruct](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) and uses [SigLIP 2](https://huggingface.co/google/siglip-so400m-patch14-384) as its vision backbone.
It is trained primarily on the Molmo2-VideoPoint data and is meant to be used only for video pointing and counting.

Ai2 is committed to open science. The Molmo2 datasets are available [here](https://huggingface.co/collections/allenai/molmo2-data).
All other artifacts used in creating Molmo2 (training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.

Quick links:
- 📂 [All Models](https://huggingface.co/collections/allenai/molmo2)
- 📃 [Paper](https://allenai.org/papers/molmo2)
- 🎥 [Blog with Videos](https://allenai.org/blog/molmo2)

## Quick Start

### Setup Conda Environment
```
conda create --name transformers4571 python=3.11
conda activate transformers4571
pip install transformers==4.57.1
pip install torch pillow einops torchvision accelerate decord2 molmo_utils
```
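
To confirm the environment is set up as expected, a minimal sanity check like the one below (illustrative only) can be run inside the activated `transformers4571` environment:

```
# Illustrative sanity check: confirm the pinned install imports cleanly
# inside the activated "transformers4571" environment.
import torch
import transformers
from molmo_utils import process_vision_info  # used in the example below

print("transformers:", transformers.__version__)  # expect 4.57.1
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```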

### Pointing Video QA

```
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
from molmo_utils import process_vision_info
import re

model_id = "allenai/Molmo2-4B"

# load the processor
processor = AutoProcessor.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype="auto",
    device_map="auto"
)

# load the model
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype="auto",
    device_map="auto"
)

COORD_REGEX = re.compile(r"<(?:points|tracks).*? coords=\"([0-9\t:;, .]+)\"/?>")
FRAME_REGEX = re.compile(r"(?:^|\t|:|,|;)([0-9\.]+) ([0-9\. ]+)")
POINTS_REGEX = re.compile(r"([0-9]+) ([0-9]{3,4}) ([0-9]{3,4})")


def _points_from_num_str(text, image_w, image_h):
    """Yield (point_id, x, y) triplets parsed from a per-frame coordinate string."""
    for points in POINTS_REGEX.finditer(text):
        ix, x, y = points.group(1), points.group(2), points.group(3)
        # our points format assumes coordinates are scaled by 1000
        x, y = float(x) / 1000 * image_w, float(y) / 1000 * image_h
        if 0 <= x <= image_w and 0 <= y <= image_h:
            yield ix, x, y


def extract_video_points(text, image_w, image_h, extract_ids=False):
    """Extract video pointing coordinates as a flattened list of (t, x, y) triplets from model output text."""
    all_points = []
    for coord in COORD_REGEX.finditer(text):
        for point_grp in FRAME_REGEX.finditer(coord.group(1)):
            frame_id = float(point_grp.group(1))
            w, h = (image_w, image_h)
            for idx, x, y in _points_from_num_str(point_grp.group(2), w, h):
                if extract_ids:
                    all_points.append((frame_id, idx, x, y))
                else:
                    all_points.append((frame_id, x, y))
    return all_points


messages = [
    {
        "role": "user",
        "content": [
            dict(type="text", text="Point to the penguins."),
            dict(type="video", video="https://storage.googleapis.com/oe-training-public/demo_videos/many_penguins.mp4"),
        ],
    }
]

# process the video using `molmo_utils.process_vision_info`
_, videos, video_kwargs = process_vision_info(messages)
videos, video_metadatas = zip(*videos)
videos, video_metadatas = list(videos), list(video_metadatas)

# apply the chat template to the input messages
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# process the video and text
inputs = processor(
    videos=videos,
    video_metadata=video_metadatas,
    text=text,
    padding=True,
    return_tensors="pt",
    **video_kwargs,
)

inputs = {k: v.to(model.device) for k, v in inputs.items()}

# generate output
with torch.inference_mode():
    generated_ids = model.generate(**inputs, max_new_tokens=2048)

# keep only the newly generated tokens and decode them to text
generated_tokens = generated_ids[0, inputs['input_ids'].size(1):]
generated_text = processor.tokenizer.decode(generated_tokens, skip_special_tokens=True)

# decode video pointing outputs
points = extract_video_points(generated_text, image_w=video_metadatas[0]["width"], image_h=video_metadatas[0]["height"])
print(points)
```
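
As parsed by the regexes above, a pointing answer embeds `<points .../>` or `<tracks .../>` tags whose `coords` attribute lists, per frame, a timestamp followed by `id x y` triplets with coordinates scaled by 1000. As an illustration only (the string below is made up, not actual model output), `extract_video_points` from the snippet above decodes such a string like this:

```
# Illustrative only: a made-up string in the format the regexes above accept,
# not actual model output.
fake_output = 'The penguins are here: <tracks coords="1.0 1 500 500;2.0 1 520 480"/>'

# With a hypothetical 1280x720 video, the 0-1000 coordinates are rescaled to pixels,
# giving one (timestamp, x, y) triplet per frame.
print(extract_video_points(fake_output, image_w=1280, image_h=720))
# [(1.0, 640.0, 360.0), (2.0, 665.6, 345.6)]

# pass extract_ids=True to also keep the per-point id from the output string
print(extract_video_points(fake_output, image_w=1280, image_h=720, extract_ids=True))
# [(1.0, '1', 640.0, 360.0), (2.0, '1', 665.6, 345.6)]
```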

## Evaluations

We report the average score on 15 academic benchmarks below.
For details on these evaluations, refer to the main video results table in our [technical report](https://allenai.org/papers/molmo2).

| Model                      | Average Score on 15 Academic Benchmarks |
|----------------------------|-----------------------------------------|
| GPT-5                      | 70.6 |
| GPT-5 mini                 | 65.0 |
| Gemini 3 Pro               | 70.0 |
| Gemini 2.5 Pro             | 71.2 |
| Gemini 2.5 Flash           | 66.7 |
| Claude Sonnet 4.5          | 59.6 |
| InternVL3.5-4B             | 53.4 |
| InternVL3.5-8B             | 54.1 |
| Qwen3-VL-4B                | 58.1 |
| Qwen3-VL-8B                | 59.5 |
| Keye-VL-1.5-8B             | 55.7 |
| GLM-4.1V-9B                | 56.9 |
| MiniCPM-V-4.5-8B           | 56.6 |
| Eagle2.5-8B                | 60.7 |
| PLM-3B                     | 53.9 |
| PLM-8B                     | 56.2 |
| LLaVA-Video-7B             | 52.7 |
| VideoChat-Flash-7B         | 56.1 |
| **Molmo2-4B (this model)** | 62.8 |
| Molmo2-8B                  | 63.1 |
| Molmo2-7B                  | 59.7 |

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2’s [Responsible Use Guidelines](https://allenai.org/responsible-use).
This model is trained on third-party datasets that are subject to academic and non-commercial research use only. Please review the sources to determine whether this model is appropriate for your use case.