---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- reasoner
- open
- r1
- explainer
---
![zfdsdfg.gif](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/WgW-xws4vzFJj48x2niWX.gif)

# **Open-R1-Mini-Experimental**

The **Open-R1-Mini-Experimental** model is a fine-tuned version of **Qwen/Qwen2-VL-2B-Instruct**, designed for **reasoning tasks**, **contextual reasoning**, and **multi-modal understanding**, and trained on **R1 reasoning logits data**. It combines a conversational interface with deep reasoning capabilities to handle complex multi-modal tasks efficiently.

#### Key Enhancements

* **Advanced Contextual Reasoning**: Leverages R1 reasoning logits data to deliver strong performance on reasoning tasks, strengthening logical inference and decision-making.

* **Understanding Images of Varying Resolution & Aspect Ratio**: The model performs strongly on visual understanding benchmarks such as MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Long-Context Video Understanding**: Can process and reason over videos of 20 minutes or more for high-quality video-based question answering, content creation, and dialogue (a video-input sketch follows the **How to Use** example below).

* **Device Integration**: With strong reasoning and decision-making abilities, the model can be integrated into mobile devices, robots, and automation systems for real-time operation based on both visual and textual input.

* **Multilingual Support**: Supports text understanding in various languages within images, including English, Chinese, Japanese, Korean, Arabic, most European languages, and Vietnamese.

### Sample Inference

![open-r1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ay3lb1nG7D-S56fV6qakg.png)

**Demo:** [open-r1-exp.ipynb](https://huggingface.co/prithivMLmods/Open-R1-Mini-Experimental/blob/main/open-r1-reasoner-doc-py/open-r1-exp.ipynb)

### How to Use

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic device placement
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Open-R1-Mini-Experimental", torch_dtype="auto", device_map="auto"
)

# Recommended: Enable flash_attention_2 for better performance in multi-image and video tasks
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Load processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")

# Optional: adjust the visual token range to trade visual detail for memory
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Analyze the context of this image."},
        ],
    }
]

# Prepare input
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
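
The long-video capability noted above reuses the same pipeline; only the message content changes. Below is a minimal, untested sketch following upstream Qwen2-VL conventions — the video path, `max_pixels`, and `fps` values are illustrative placeholders, not values shipped with this model.

```python
# Video input following upstream Qwen2-VL conventions; the path and
# sampling settings are illustrative placeholders.
video_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe the key events in this video."},
        ],
    }
]

# The rest of the pipeline is identical to the image example above:
# apply_chat_template -> process_vision_info -> processor(...) -> model.generate(...)
```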
### Buffer Handling

When streaming generation output (for example, through a `TextIteratorStreamer`), accumulate the partial text in a buffer and strip the `<|im_end|>` end-of-turn marker before yielding it to the caller:

```python
buffer = ""
for new_text in streamer:
    buffer += new_text
    # Strip Qwen's end-of-turn marker so it never reaches the UI
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
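
For context, here is a minimal sketch of how the `streamer` consumed above can be created; the thread-based pattern and the `max_new_tokens` budget are assumptions about the surrounding application, not part of this model card's original code.

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Decode incrementally; keep special tokens so the loop above can strip
# <|im_end|> itself, matching the original snippet.
streamer = TextIteratorStreamer(
    processor.tokenizer, skip_prompt=True, skip_special_tokens=False
)

# Run generation in a background thread so the streamer can be consumed
# from the main thread as tokens arrive.
generation_kwargs = dict(**inputs, streamer=streamer, max_new_tokens=512)  # assumed budget
Thread(target=model.generate, kwargs=generation_kwargs).start()
```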
### **Key Features**

1. **Advanced Contextual Reasoning:**  
   - Optimized for **context-aware problem-solving** and **logical inference** based on R1 reasoning logits.

2. **Optical Character Recognition (OCR):**  
   - Extracts and processes text from images with high accuracy.

3. **Mathematical and Logical Problem Solving:**  
   - Supports complex reasoning and outputs equations in **LaTeX format**.

4. **Conversational and Multi-Turn Interaction:**  
   - Handles **multi-turn dialogue** with enhanced memory retention and response coherence (a minimal multi-turn sketch follows this list).

5. **Multi-Modal Inputs & Outputs:**  
   - Processes images, text, and combined inputs to generate insightful analyses.

6. **Secure and Efficient Model Loading:**  
   - Uses **Safetensors** for faster and more secure model weight handling.
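
To illustrate item 4, here is a minimal sketch of a multi-turn message list; the assistant turn below is an illustrative placeholder standing in for the model's first-turn output, and the processing pipeline is the same as in **How to Use**.

```python
# Multi-turn conversation: earlier assistant turns are replayed through the
# chat template so the model sees the full dialogue history.
multi_turn_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
    {
        "role": "assistant",
        # Placeholder reply standing in for the model's first-turn output
        "content": [{"type": "text", "text": "A woman sits on the beach with her dog."}],
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "What mood does the scene convey, and why?"}],
    },
]

# Process exactly as in "How to Use":
# text = processor.apply_chat_template(multi_turn_messages, tokenize=False, add_generation_prompt=True)
```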