---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- reasoning
- multimodal
- vlm
- math
- visual-question-answering
---

# OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles

This model was presented in the paper [OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles](https://huggingface.co/papers/2503.17352).

## Abstract

We introduce OpenVLThinker, one of the first open-source large vision-language models (LVLMs) to exhibit sophisticated chain-of-thought reasoning, achieving notable performance gains on challenging visual reasoning tasks. While text-based reasoning models (e.g., Deepseek R1) show promising results in text-only tasks, distilling their reasoning into LVLMs via supervised fine-tuning (SFT) often results in performance degradation due to imprecise visual grounding. Conversely, purely reinforcement learning (RL)-based methods face a large search space, hindering the emergence of reflective behaviors in smaller models (e.g., 7B LVLMs). Surprisingly, alternating between SFT and RL ultimately results in significant performance improvements after a few iterations. Our analysis reveals that the base model rarely exhibits reasoning behaviors initially, but SFT effectively surfaces these latent actions and narrows the RL search space, accelerating the development of reasoning capabilities. Each subsequent RL stage further refines the model's reasoning skills, producing higher-quality SFT data for continued self-improvement. OpenVLThinker-7B consistently advances performance across six benchmarks demanding mathematical and general reasoning, notably improving MathVista by 3.8%, EMMA by 2.4%, and HallusionBench by 1.6%. Beyond demonstrating the synergy between SFT and RL for complex reasoning tasks, our findings provide early evidence towards achieving R1-style reasoning in multimodal contexts.
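The iterative recipe sketched in the abstract (distill SFT data with the current model, fine-tune, refine with RL, repeat) can be illustrated schematically. Everything below is a toy sketch, not the paper's actual training code: the "model" is reduced to a scalar skill level, and all function names are illustrative placeholders.

```python
# Toy sketch of the iterative SFT-RL cycle. The "model" here is just a
# scalar skill level; sft/rl are stand-ins for real training stages.

def distill_sft_data(skill, pool):
    """Keep only the examples the current model can already solve correctly."""
    return [ex for ex in pool if ex["difficulty"] <= skill]

def sft(skill, data):
    """SFT surfaces latent reasoning behaviors; gain scales with the data kept."""
    return skill + 0.1 * len(data)

def rl(skill):
    """RL refines reasoning within the search space narrowed by SFT."""
    return skill * 1.2

def iterate_sft_rl(skill, pool, iterations=3):
    history = []
    for _ in range(iterations):
        data = distill_sft_data(skill, pool)  # a stronger model yields better SFT data
        skill = sft(skill, data)
        skill = rl(skill)
        history.append(skill)
    return skill, history

pool = [{"difficulty": d} for d in (0.5, 1.0, 2.0, 4.0)]
final, history = iterate_sft_rl(1.0, pool)
print(history)  # skill increases monotonically across iterations
```

The point of the sketch is the feedback loop: each RL stage raises the model's capability, which enlarges and improves the distilled SFT set for the next round.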
Code, model, and data are available in the GitHub repository linked below.

## Overview

OpenVLThinker-7B is a vision-language reasoning model designed to handle multimodal tasks, tuned especially for visual mathematical problem-solving.

For more details: [Blog](https://yihe-deng.notion.site/openvlthinker), [GitHub](https://github.com/yihedeng9/OpenVLThinker)

## How to use

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

# 1. Define model and processor names
model_name = "ydeng9/OpenVLThinker-7B"
processor_name = "Qwen/Qwen2.5-VL-7B-Instruct"

# 2. Load the OpenVLThinker-7B model and processor
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires flash-attn; drop this argument if it is not installed
    device_map=device,
)
processor = AutoProcessor.from_pretrained(processor_name)

# 3. Define a sample image URL and an instruction
image_url = "https://example.com/sample_image.jpg"  # replace with your image URL
instruction = "Example question"

# 4. Create a multimodal prompt using a chat message structure
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image_url},
            {"type": "text", "text": instruction},
        ],
    }
]

# 5. Generate a text prompt from the chat messages
text_prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# 6. Process image (and video) inputs from the messages
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text_prompt],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(device)

# 7. Generate the model's response (near-greedy decoding parameters)
generated_ids = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=2048,
    top_p=0.001,
    top_k=1,
    temperature=0.01,
    repetition_penalty=1.0,
)

# 8. Decode only the newly generated tokens (strip the prompt) into text
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
generated_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]

# 9. Print the generated response
print("Generated Response:")
print(generated_text)
```

### Citation

```bibtex
@misc{deng2025openvlthinker,
      title={OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles},
      author={Yihe Deng and Hritik Bansal and Fan Yin and Nanyun Peng and Wei Wang and Kai-Wei Chang},
      year={2025},
      eprint={2503.17352},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://huggingface.co/papers/2503.17352},
}
```
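Since the model emits a long chain-of-thought trace before its final answer, downstream use often needs a small post-processing step. The helper below assumes the answer appears in a `\boxed{...}` expression, a common convention for math-reasoning models; the exact format OpenVLThinker emits may differ, so adapt the pattern to the outputs you actually observe.

```python
import re

def extract_boxed_answer(text: str):
    """Return the content of the last \\boxed{...} in a reasoning trace,
    or None if the trace contains no boxed expression."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

trace = r"Step 1: compute the area. Step 2: simplify. Therefore the answer is \boxed{42}."
print(extract_boxed_answer(trace))  # -> 42
```

Taking the last match (rather than the first) is deliberate: reasoning traces sometimes box intermediate results before the final answer.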