---
license: apache-2.0
datasets:
- allenai/MolmoWeb-SyntheticTraj
- allenai/MolmoWeb-HumanTrajs
- allenai/MolmoWeb-HumanSkills
- allenai/MolmoWeb-SyntheticSkills
- allenai/MolmoWeb-SyntheticQA
- allenai/MolmoWeb-SyntheticGround
language:
- en
base_model:
- Qwen/Qwen3-8B
- google/siglip-so400m-patch14-384
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- olmo
- molmo
- molmo2
---

<img src="molmoweb_logo.png" alt="Logo for the MolmoWeb Project" style="width: auto; height: 50px;">

# MolmoWeb-8B-Native

**Note:** this is the Molmo-native checkpoint, and it is NOT Hugging Face `transformers`-compatible. See [allenai/MolmoWeb-8B](https://huggingface.co/allenai/MolmoWeb-8B) for the HF-compatible checkpoint.

MolmoWeb is a family of fully open multimodal web agents. MolmoWeb agents achieve state-of-the-art results, outperforming similarly sized open-weight-only models such as Fara-7B, UI-Tars-1.5-7B, and Holo1-7B. MolmoWeb-8B also surpasses set-of-marks (SoM) agents built on much larger closed frontier models like GPT-4o. We further demonstrate consistent gains from test-time scaling via parallel rollouts with best-of-N selection, achieving 94.7% and 60.5% pass@4 (compared to 78.2% and 35.3% pass@1) on WebVoyager and Online-Mind2Web, respectively.
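
To make the pass@N metric above concrete: with N parallel rollouts per task, a task counts as solved if any of its rollouts succeeds. The sketch below (with toy placeholder outcomes, not MolmoWeb results) computes this directly:

```python
from typing import Sequence

def pass_at_n(rollouts: Sequence[Sequence[bool]], n: int) -> float:
    """Fraction of tasks with at least one success among their first n rollouts.

    `rollouts[i]` holds the per-rollout success flags for task i.
    """
    if not rollouts:
        return 0.0
    solved = sum(any(task[:n]) for task in rollouts)
    return solved / len(rollouts)

# Toy example: 4 tasks x 4 parallel rollouts (hypothetical outcomes).
results = [
    [True, False, False, False],   # solved on the first rollout
    [False, False, True, False],   # solved only on a later rollout
    [False, False, False, False],  # never solved
    [True, True, False, True],
]
print(pass_at_n(results, 1))  # pass@1 = 0.5
print(pass_at_n(results, 4))  # pass@4 = 0.75
```

This is why pass@4 can be much higher than pass@1: best-of-N selection only needs one of the N rollouts to complete the task.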

**Learn more** about the MolmoWeb family in our announcement [blog post](https://allenai.org/blog/molmoweb) and [tech report](https://allenai.org/papers/molmoweb).

MolmoWeb-8B-Native is based on the [Molmo2](https://arxiv.org/abs/2601.10611) architecture, which pairs [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) as the language backbone with [SigLIP 2](https://huggingface.co/google/siglip-so400m-patch14-384) as the vision backbone.

Ai2 is committed to open science. The MolmoWeb datasets are available [here](https://huggingface.co/collections/allenai/molmoweb-data). All other artifacts used in creating MolmoWeb (training code, [evaluations](https://github.com/allenai/molmoweb), intermediate checkpoints) will be made available, furthering our commitment to open-source AI development and reproducibility.

Quick links:
- 💬 [Demo](https://molmoweb.allen.ai/)
- 📂 [All Models](https://huggingface.co/collections/allenai/molmoweb)
- 📚 [All Data](https://huggingface.co/collections/allenai/molmoweb-data)
- 📃 [Paper](https://allenai.org/papers/molmoweb)
- 🎥 [Blog with Videos](https://allenai.org/blog/molmoweb)

## Usage
Please refer to our [GitHub repo](https://github.com/allenai/molmoweb/) for inference code.

## License and Use

This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).