---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-8B
- google/siglip-so400m-patch14-384
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- olmo
- molmo
- molmo2
---

# MolmoPoint-8B
MolmoPoint-8B is a fully open VLM developed by the Allen Institute for AI (Ai2) that supports image, video, and multi-image understanding and grounding.
It has a novel pointing mechanism that improves image pointing, video pointing, and video tracking; see our technical report for details.

Note that the Hugging Face MolmoPoint model does not support training; see our GitHub repo for the training code.

Quick links:
- 💬 [Code](https://github.com/allenai/molmo2)
- 📂 [All Models](https://huggingface.co/collections/allenai/molmo_point)
- 📃 [Paper](https://allenai.org/papers/molmo_point)
- 📝 [Blog](https://allenai.org/blog/molmo_point)

## Quick Start

### Setup Conda Environment
```
conda create --name transformers4571 python=3.11
conda activate transformers4571
pip install transformers==4.57.1
pip install torch pillow einops torchvision accelerate decord2
```
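
To sanity-check that the pinned environment resolved correctly, a quick check you can run (a hypothetical verification step, not from the original instructions):

```
import torch
import transformers

# The Quick Start pins transformers 4.57.1; confirm it is the version in use.
print(transformers.__version__)   # expected: 4.57.1
print(torch.cuda.is_available())  # True if a GPU is visible
```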

## Inference
We recommend running MolmoPoint with `logits_processor=model.build_logit_processor_from_inputs(model_inputs)`
to ensure that point tokens are generated in a valid way.

In MolmoPoint, points are generated as a series of special tokens instead of coordinates;
decoding the tokens back into points requires some additional metadata.
The metadata is returned by the preprocessor using `return_pointing_metadata`.
Then `model.extract_image_points` and `model.extract_video_points` do the decoding; they
return a list of ({image_id|timestamps}, object_id, pixel_x, pixel_y) output points.
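
As a minimal end-to-end sketch of that flow (the repo id, the processor call shape, and the `extract_image_points` argument order are assumptions based on the description above, not the verbatim examples below):

```
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "allenai/MolmoPoint-8B"  # assumed repo id

# MolmoPoint ships custom modeling code, hence trust_remote_code.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Preprocess; `return_pointing_metadata` asks the preprocessor for the metadata
# needed to decode point tokens back into pixels (exact call shape assumed).
model_inputs = processor(
    images=[Image.open("example.jpg")],
    text="Point to the dog.",
    return_pointing_metadata=True,
    return_tensors="pt",
).to(model.device)

# The recommended logits processor constrains decoding so the special point
# tokens are always emitted in a valid sequence.
output = model.generate(
    **model_inputs,
    logits_processor=model.build_logit_processor_from_inputs(model_inputs),
    max_new_tokens=256,
)

# Decode the point tokens into (image_id, object_id, pixel_x, pixel_y) tuples.
points = model.extract_image_points(output, model_inputs)  # argument order assumed
print(points)
```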

### Image Pointing Example:

```
...
points = model.extract_video_points(
    ...
)
print(points)
```
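
Once points are decoded, downstream use is plain Python. For example, a hypothetical overlay of decoded image points with Pillow (the dummy `points` values below just stand in for real model output):

```
from PIL import Image, ImageDraw

# `points` in the format returned by model.extract_image_points:
# (image_id, object_id, pixel_x, pixel_y). Dummy values for illustration.
points = [(0, 0, 120.0, 84.5), (0, 1, 310.2, 200.7)]

image = Image.open("example.jpg").convert("RGB")
draw = ImageDraw.Draw(image)
for image_id, object_id, x, y in points:
    # Mark each predicted point and label it with its object id.
    draw.ellipse((x - 5, y - 5, x + 5, y + 5), outline="red", width=2)
    draw.text((x + 8, y - 8), f"obj {object_id}", fill="red")
image.save("points_overlay.png")
```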

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2’s Responsible Use Guidelines. This model is trained on third-party datasets that are subject to academic and non-commercial research use only. Please review the sources to determine if this model is appropriate for your use case.