---
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- multimodal
- vision-language-model
- small-language-model
base_model:
- google/siglip-so400m-patch14-384
- Qwen/Qwen3-0.6B
---

# Extract+Think Model Card for markendo/llava-extract-from-scratch-qwen3-0.6B

This repository hosts the **Extract-0.6B<sup>†</sup>** model, which serves as the perception module for the two-stage **Extract+Think<sup>†</sup>** framework. This model was presented in the paper [Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models](https://huggingface.co/papers/2511.17487).

Extract+Think is an approach designed to address perception and reasoning bottlenecks in small multimodal models. It centers on visual extraction tuning, which explicitly trains the model to consistently extract instruction-relevant visual details across tasks; the extracted information is then passed to a separate reasoning stage.
In this variant, we train from scratch under the visual extraction tuning paradigm, without prior visual instruction tuning or captioning.

*   📖 **Paper:** [Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models](https://huggingface.co/papers/2511.17487)
*   🌐 **Project Page:** https://web.stanford.edu/~markendo/projects/downscaling_intelligence
*   💻 **Code:** https://github.com/markendo/downscaling_intelligence

<p align="center">
<img src="https://github.com/markendo/downscaling_intelligence/raw/main/assets/downscaling_intelligence.png" width="500" height="auto">
</p>

## Model details

Extract-0.6B<sup>†</sup> is used as the perception module for the two-stage Extract+Think<sup>†</sup> framework. For the reasoning stage, the authors primarily utilize Qwen3 models ([1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) and [4B](https://huggingface.co/Qwen/Qwen3-4B)).

## Usage

This model is evaluated with the `lmms-eval` framework. Setup and evaluation instructions are detailed in the [GitHub repository](https://github.com/markendo/downscaling_intelligence); they involve cloning the repository, installing dependencies, and integrating the custom evaluation files with `lmms-eval`.

For generating extracted visual information, the following command is provided:
```bash
cd lmms-eval
model_name=markendo/llava-extract-from-scratch-qwen3-0.6B
python -m lmms_eval \
    --model=llava_onevision \
    --model_args=pretrained=$model_name,conv_template=qwen_1_5,device_map=auto \
    --tasks=mmstar_prism_stage_1 \
    --batch_size=1 \
    --output_path results \
    --log_samples
```
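
Outside of `lmms-eval`, standalone extraction can likely be run through the LLaVA-OneVision codebase this repository builds on. The snippet below is a minimal, unverified sketch following the upstream LLaVA-OneVision inference example; the `llava_qwen` model name, the `qwen_1_5` conversation template, and the extraction prompt are assumptions and may need adjusting for this checkpoint.
```python
# Unofficial sketch: single-image extraction via the LLaVA-OneVision codebase.
# Assumes that codebase (from the project's GitHub repository) is installed.
import copy
import torch
from PIL import Image
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

pretrained = "markendo/llava-extract-from-scratch-qwen3-0.6B"
# "llava_qwen" mirrors upstream LLaVA-OneVision examples; it is an assumption here.
tokenizer, model, image_processor, _ = load_pretrained_model(
    pretrained, None, "llava_qwen", device_map="auto"
)
model.eval()

image = Image.open("example.jpg")
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [t.to(dtype=torch.float16, device=model.device) for t in image_tensor]

# Illustrative extraction prompt; the paper's exact instruction may differ.
question = DEFAULT_IMAGE_TOKEN + "\nExtract the visual details relevant to answering: What is the person holding?"
conv = copy.deepcopy(conv_templates["qwen_1_5"])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
with torch.inference_mode():
    output = model.generate(
        input_ids,
        images=image_tensor,
        image_sizes=[image.size],
        do_sample=False,
        max_new_tokens=256,
    )
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```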
Please refer to the [GitHub repository](https://github.com/markendo/downscaling_intelligence) for full setup instructions, including the second stage of reasoning.
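
As a rough illustration of that second stage, the sketch below passes a stage-1 extraction (as plain text) to a Qwen3 reasoner with standard `transformers`. The prompt wording and the sample extraction are placeholders, not the authors' actual template; see the repository for the real pipeline.
```python
# Illustrative reasoning-stage call: feed the extracted visual description
# to a Qwen3 model as text context. Prompt wording is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

reasoner_name = "Qwen/Qwen3-1.7B"  # the paper also uses Qwen3-4B
tokenizer = AutoTokenizer.from_pretrained(reasoner_name)
model = AutoModelForCausalLM.from_pretrained(reasoner_name, torch_dtype="auto", device_map="auto")

extraction = "A person in a red jacket stands under a black umbrella ..."  # stage-1 output (placeholder)
question = "What is the person holding?"

messages = [{
    "role": "user",
    "content": (
        f"Visual information extracted from the image:\n{extraction}\n\n"
        f"Question: {question}\nAnswer using only the extracted information."
    ),
}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```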

## Acknowledgments

This repository is built on top of [LLaVA-OneVision](https://github.com/LLaVA-VL/LLaVA-NeXT) and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).

## Citation
```bib
@article{endo2025downscalingintelligence,
  author    = {Endo, Mark and Yeung-Levy, Serena},
  title     = {Downscaling Intelligence: Exploring Perception and Reasoning Bottlenecks in Small Multimodal Models},
  journal   = {arXiv preprint},
  year      = {2025},
}
```