Add comprehensive model card for Mini-o3 (#1, opened by nielsr, HF Staff)

README.md (added):

---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model: Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- Mini-o3/Mini-o3-Coldstart-Dataset
tags:
- visual-search
- multimodal
---

# Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search

This repository contains the `Mini-o3-7B-SFT` model checkpoint, an advanced multimodal model from the paper [Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search](https://huggingface.co/papers/2509.07969).

The Mini-o3 system addresses limitations in existing open-source visual agents by enabling deep, multi-turn reasoning (spanning tens of steps) and achieving state-of-the-art performance on challenging visual search tasks.

- 📚 [Paper](https://huggingface.co/papers/2509.07969)
- 🌍 [Project Page](https://mini-o3.github.io/)
- 💻 [GitHub Repository](https://github.com/Mini-o3/Mini-o3)
- 🤗 [Hugging Face Models](https://huggingface.co/Mini-o3/models)
- 📦 [Hugging Face Data](https://huggingface.co/Mini-o3/datasets)

## Abstract

Recent advances in large multimodal models have leveraged image-based tools with reinforcement learning to tackle visual problems. However, existing open-source approaches often exhibit monotonous reasoning patterns and allow only a limited number of interaction turns, making them inadequate for difficult tasks that require trial-and-error exploration. In this work, we address this limitation by scaling up tool-based interactions and introduce Mini-o3, a system that executes deep, multi-turn reasoning -- spanning tens of steps -- and achieves state-of-the-art performance on challenging visual search tasks. Our recipe for reproducing OpenAI o3-style behaviors comprises three key components. First, we construct the Visual Probe Dataset, a collection of thousands of challenging visual search problems designed for exploratory reasoning. Second, we develop an iterative data collection pipeline to obtain cold-start trajectories that exhibit diverse reasoning patterns, including depth-first search, trial-and-error, and goal maintenance. Third, we propose an over-turn masking strategy that prevents penalization of over-turn responses (those that hit the maximum number of turns) during reinforcement learning, thereby balancing training-time efficiency with test-time scalability. Despite training with an upper bound of only six interaction turns, our model generates trajectories that naturally scale to tens of turns at inference time, with accuracy improving as the number of turns increases. Extensive experiments demonstrate that Mini-o3 produces rich reasoning patterns and deep thinking paths, effectively solving challenging visual search problems.
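The over-turn masking strategy described above can be sketched in a few lines. This is an illustrative simplification, not the official RL implementation (which lives in the GitHub repository); the function and field names are hypothetical:

```python
# Illustrative sketch of over-turn masking (hypothetical names): rollouts
# that hit the turn cap are masked out of the RL loss (weight 0.0) rather
# than being penalized as failures.
def overturn_loss_mask(turn_counts, max_turns=6):
    """Return a per-rollout loss weight given each rollout's turn count."""
    return [0.0 if turns >= max_turns else 1.0 for turns in turn_counts]

# Rollouts finishing in 3 and 5 turns keep their reward signal; the one
# that hits the 6-turn cap contributes nothing to the policy loss.
print(overturn_loss_mask([3, 6, 5]))  # → [1.0, 0.0, 1.0]
```

Because over-turn rollouts are neither rewarded nor punished, the model is free to keep exploring at inference time, which is how trajectories scale to tens of turns despite the six-turn training cap.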

## Model Description

This model is the cold-start supervised fine-tuning (SFT) checkpoint (`Mini-o3-7B-SFT`) of the Mini-o3 project. It is based on `Qwen2.5-VL-7B-Instruct` and serves as the initial stage before further reinforcement learning. Mini-o3 is designed to achieve deep, multi-turn reasoning capabilities for complex visual search problems, exhibiting diverse reasoning patterns like depth-first search, trial-and-error, and goal maintenance.

## Usage

For detailed installation instructions, training procedures (cold-start SFT and reinforcement learning), and other advanced usage examples, please refer to the [official GitHub repository](https://github.com/Mini-o3/Mini-o3).
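As a starting point, a minimal single-turn inference sketch is given below, assuming the standard Qwen2.5-VL chat interface in recent `transformers`. The model id, image URL, and message keys are assumptions, and the official multi-turn tool-use loop in the GitHub repository should be preferred for real visual search:

```python
def build_messages(image_url, question):
    """Assemble a single-turn chat message in the Qwen2.5-VL content format."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": question},
        ],
    }]

def generate_answer(image_url, question, model_id="Mini-o3/Mini-o3-7B-SFT"):
    # Hypothetical quick-start; heavy imports are kept local so that
    # build_messages stays dependency-free.
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")
    inputs = processor.apply_chat_template(
        build_messages(image_url, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    return processor.batch_decode(
        output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```

Note that single-turn generation does not exercise the model's defining capability; the multi-turn loop with image tools is what produces the deep trajectories described above.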

## Evaluation Results

Mini-o3 (7B) achieves state-of-the-art results among 7B-scale peers on visual search benchmarks, with strong performance on VisualProbe, V\* Bench, HR-Bench, and MME-RealWorld.

| Model | VisualProbe (hard) | VisualProbe (medium) | VisualProbe (easy) | V\* Bench | HR-Bench 4K | HR-Bench 8K | MME-RealWorld |
|---|---:|---:|---:|---:|---:|---:|---:|
| GPT-4o | 11.2 | 15.4 | 47.5 | 65.2 | 62.0 | 58.3 | 45.2 |
| LLaVA-OneVision | 13.4 | 12.5 | 36.2 | 70.9 | 61.2 | 54.0 | 57.4 |
| Qwen2.5-VL-Instruct | 23.9 | 26.0 | 39.1 | 75.5 | 68.2 | 62.7 | 57.3 |
| SEAL† | – | – | – | 75.4 | – | – | – |
| DyFo† | – | – | – | 81.2 | – | – | – |
| Chain-of-Focus† | – | – | – | 88.0 | – | – | – |
| Pixel Reasoner‡ | 28.8 | 29.6 | 58.4 | 86.3 | 74.0 | 66.9 | 64.4 |
| DeepEyes‡ | 35.1 | 29.8 | 60.1 | 83.3 | 73.2 | 69.5 | 64.0 |
| **Mini-o3 (Ours)** | **48.0** | **50.4** | **67.0** | **88.2** | **77.5** | **73.3** | **65.5** |

- † These models report only Avg@1, and their weights are not publicly available.
- ‡ Re-evaluated with the official model weights and evaluation code to obtain Avg@32.
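For reference, the Avg@k metric mentioned in the footnotes averages per-problem accuracy over k independent rollouts. A minimal sketch (a hypothetical helper, not the official evaluation code):

```python
def avg_at_k(rollout_results):
    """Avg@k over a benchmark: for each problem, take the fraction of its
    k rollouts answered correctly (1/0), then average across problems."""
    per_problem = [sum(r) / len(r) for r in rollout_results]
    return sum(per_problem) / len(per_problem)

# Two problems with two rollouts each: per-problem accuracies are
# 0.5 and 1.0, so Avg@2 = 0.75.
print(avg_at_k([[1, 0], [1, 1]]))  # → 0.75
```

Averaging over many rollouts (k = 32 here) reduces the variance that single-sample (Avg@1) scores suffer from on hard exploratory tasks.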

## Examples

Mini-o3 demonstrates rich reasoning patterns and deep thinking paths. Visual examples showcasing its capabilities can be found in the [Examples section of the GitHub repository](https://github.com/Mini-o3/Mini-o3#examples).

## Citation

If you find this repository useful for your research, please consider citing the paper:

```bibtex
@article{lai2025mini-o3,
  title={Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search},
  author={Lai, Xin and Li, Junyi and Li, Wei and Liu, Tao and Li, Tianjian and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2509.07969},
  year={2025}
}
```

## License

The code is licensed under [Apache 2.0](https://github.com/Mini-o3/Mini-o3/blob/main/LICENSE).

The data and model checkpoints are licensed under [CC BY-NC 4.0](https://github.com/Mini-o3/Mini-o3/blob/main/WEIGHT_LICENSE). They are intended and licensed for research use only, restricted to uses that comply with the license agreement of Qwen2.5-VL.