prithivMLmods committed on
Commit 4b3cb47 · verified · 1 Parent(s): fe394c4

Update README.md

Files changed (1): README.md +116 -1

README.md CHANGED
tags:
- open
- r1
- explainer
---
![zgdfgzdfg.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/0vbyb6c2URR861vYDbKHn.png)

# **Open-R1-Mini-Experimental-GGUF**

The **Open-R1-Mini-Experimental-GGUF** model is a fine-tuned version of **Qwen/Qwen2-VL-2B-Instruct**, designed for **reasoning tasks**, **context reasoning**, and **multi-modal understanding**, and trained on **R1 reasoning logits data**. It combines a conversational interface with deep reasoning capabilities to handle complex multi-modal tasks efficiently.

#### Key Enhancements:

* **Advanced Contextual Reasoning**: Leverages R1 reasoning logits data to strengthen logical inference and decision-making on reasoning tasks.

* **Understanding images of various resolutions and aspect ratios**: Performs strongly on visual understanding benchmarks such as MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Long-Context Video Understanding**: Can process and reason over videos of 20 minutes or more for video-based question answering, content creation, and dialogue.

* **Device Integration**: Strong reasoning and decision-making abilities allow the model to be integrated into mobile devices, robots, and automation systems for real-time operation on both visual and textual input.

* **Multilingual Support**: Understands text within images in many languages, including English, Chinese, Japanese, Korean, Arabic, Vietnamese, and most European languages.

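Since the model follows the Qwen2-VL input format, the long-context video capability above is driven by the same `messages` schema as image inputs. A minimal sketch of a video turn, assuming the upstream Qwen2-VL conventions for the `video`, `fps`, and `max_pixels` keys (the file path is a placeholder, not a real video):

```python
# Sketch of a Qwen2-VL-style video message.
# "fps" controls frame sampling and "max_pixels" caps per-frame resolution,
# following the upstream Qwen2-VL conventions (assumption, not from this card).
video_message = {
    "role": "user",
    "content": [
        {
            "type": "video",
            "video": "file:///path/to/video.mp4",  # placeholder path
            "max_pixels": 360 * 420,
            "fps": 1.0,
        },
        {"type": "text", "text": "Summarize the key events in this video."},
    ],
}

messages = [video_message]
print(messages[0]["content"][0]["type"])  # -> video
```

The resulting `messages` list is consumed by `process_vision_info` and the processor exactly like the image example in the How to Use section.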
### Sample Inference

![open-r1.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ay3lb1nG7D-S56fV6qakg.png)

**Demo:** https://huggingface.co/prithivMLmods/Open-R1-Mini-Experimental/blob/main/open-r1-reasoner-doc-py/open-r1-exp.ipynb

### How to Use

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic device placement
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Open-R1-Mini-Experimental", torch_dtype="auto", device_map="auto"
)

# Recommended: enable flash_attention_2 for better performance in multi-image and video tasks
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Load the processor from the same repository as the model
processor = AutoProcessor.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")

# Optionally restrict the visual token range to trade detail for memory
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental",
#     min_pixels=min_pixels,
#     max_pixels=max_pixels,
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Analyze the context of this image."},
        ],
    }
]

# Prepare inputs: render the chat template, then collect image/video tensors
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate, then strip the prompt tokens from the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
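The commented `min_pixels` / `max_pixels` values are written as multiples of 28 × 28 because Qwen2-VL-style processors spend roughly one visual token per 28 × 28 pixel patch. A rough sketch of the resulting token budget (an approximation; the real processor also resizes images to patch-aligned dimensions):

```python
# Rough visual-token budget for Qwen2-VL-style processors, which spend
# ~one token per 28x28 pixel patch (inferred from the 256*28*28 and
# 1280*28*28 values used in the snippet above; an approximation).
PATCH = 28

def approx_visual_tokens(width: int, height: int,
                         min_pixels: int = 256 * PATCH * PATCH,
                         max_pixels: int = 1280 * PATCH * PATCH) -> int:
    """Estimate visual tokens for an image, clamped to the pixel range."""
    pixels = min(max(width * height, min_pixels), max_pixels)
    return pixels // (PATCH * PATCH)

print(approx_visual_tokens(1280, 720))  # 921600 px -> 1175 tokens
print(approx_visual_tokens(100, 100))   # clamped up to the 256-token floor
```

Lowering `max_pixels` shrinks the visual token count (and GPU memory use) at the cost of image detail.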
### Buffer Handling

When streaming tokens (e.g. with `transformers.TextIteratorStreamer`), the end-of-turn marker can be stripped from the accumulated output before yielding it:

```python
buffer = ""
for new_text in streamer:  # streamer: e.g. a TextIteratorStreamer over model.generate
    buffer += new_text
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
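The cleanup logic above can be exercised without a model by substituting a plain generator for the streamer (the text chunks below are invented for illustration):

```python
def fake_streamer():
    # Stand-in for a TextIteratorStreamer: yields text chunks,
    # ending with the Qwen-style end-of-turn marker.
    yield from ["The answer ", "is 42.", "<|im_end|>"]

def stream_clean(streamer):
    # Same accumulate-and-strip loop as the Buffer Handling snippet above.
    buffer = ""
    for new_text in streamer:
        buffer += new_text
        buffer = buffer.replace("<|im_end|>", "")
        yield buffer

final = ""
for partial in stream_clean(fake_streamer()):
    final = partial
print(final)  # -> The answer is 42.
```

Yielding the whole buffer on each step (rather than the new chunk) keeps the UI display simple: the consumer just re-renders the latest string.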
### **Key Features**

1. **Advanced Contextual Reasoning:**
   - Optimized for **context-aware problem-solving** and **logical inference** based on R1 reasoning logits.

2. **Optical Character Recognition (OCR):**
   - Extracts and processes text from images with high accuracy.

3. **Mathematical and Logical Problem Solving:**
   - Supports complex reasoning and outputs equations in **LaTeX format**.

4. **Conversational and Multi-Turn Interaction:**
   - Handles **multi-turn dialogue** with enhanced memory retention and response coherence.

5. **Multi-Modal Inputs & Outputs:**
   - Processes images, text, and combined inputs to generate insightful analyses.

6. **Secure and Efficient Model Loading:**
   - Uses **Safetensors** for faster and more secure model weight handling.
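As a small illustration of the multi-turn interaction noted above, conversation history is carried by appending earlier turns to the same `messages` schema used in the How to Use section (the dialogue content here is invented):

```python
# Multi-turn chat history in the Qwen2-VL messages schema.
# The dialogue text is illustrative only.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "What is 12 * 9?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "12 * 9 = 108."}]},
    {"role": "user", "content": [{"type": "text", "text": "Now write that as a LaTeX equation."}]},
]

# Each new request re-sends the full history, so the model keeps context.
roles = [m["role"] for m in messages]
print(roles)  # -> ['user', 'assistant', 'user']
```

The full list is passed to `processor.apply_chat_template` exactly as in the single-turn example, with `add_generation_prompt=True` cueing the next assistant turn.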