Frankenstein-RL

Frankenstein-RL is the reinforced model (reinforcement learning applied after a cold-start instruction-tuning initialization) from the paper:

What does RL improve for Visual Reasoning? A Frankenstein-Style Analysis

Xirui Li*, Ming Li*, Tianyi Zhou

University of Maryland  |  Mohamed bin Zayed University of Artificial Intelligence

(* Co-first Authors)

This model is the RL checkpoint, obtained by reinforcement learning on top of the IN (instruction-tuned) cold-start checkpoint, which is built on the OpenMMReasoner training recipe with Qwen2.5-VL-3B-Instruct as the base model.

Overview

Our paper introduces a Frankenstein-style analysis framework to understand what reinforcement learning (RL) actually improves in vision-language models (VLMs) for visual reasoning. Rather than relying on end-to-end benchmark scores, we decompose VLMs at the granularity of transformer layers and probe their functional roles through:

  1. Functional Localization via Causal Probing — localizing vision- and reasoning-related computations along transformer depth
  2. Update Characterization via Parameter Comparison — showing that IN and RL differ systematically in update magnitude and geometry
  3. Transferability Test via Model Merging — transplanting RL-refined regions into IN models to test causal contributions
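Step 2, update characterization, amounts to comparing parameter tensors between the IN and RL checkpoints layer by layer. A minimal sketch of the per-layer update-magnitude computation, using toy state dicts in place of the real AIcell/Frankenstein-IN and AIcell/Frankenstein-RL weights (the function name and the toy checkpoint layout are illustrative, not the paper's code):

```python
import torch

def per_layer_update_norms(state_in, state_rl):
    """Relative L2 norm of the RL update for each shared parameter tensor."""
    norms = {}
    for name, w_in in state_in.items():
        delta = state_rl[name] - w_in
        norms[name] = delta.norm().item() / (w_in.norm().item() + 1e-12)
    return norms

# Toy stand-ins for two checkpoints of the same architecture; later layers
# receive a larger synthetic "RL update" to mimic depth-dependent refinement.
torch.manual_seed(0)
state_in = {f"layers.{i}.weight": torch.randn(8, 8) for i in range(4)}
state_rl = {name: w + 0.01 * (i + 1) * torch.randn_like(w)
            for i, (name, w) in enumerate(state_in.items())}

for name, rel in per_layer_update_norms(state_in, state_rl).items():
    print(f"{name}: relative update {rel:.4f}")
```

The same routine applied to the real checkpoints' `state_dict()` tensors would surface which transformer depths RL actually moves.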

Key Findings

  • RL does not uniformly improve visual perception or standalone reasoning
  • RL induces structured refinements concentrated in mid-to-late layers, improving vision-to-reasoning alignment
  • These mid-to-late refinements are both transferable (via merging) and necessary (via freezing) for RL gains
  • Freezing late layers during RL training leads to a pronounced drop in reasoning performance
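The transferability finding above is established in the paper via model merging: transplanting RL-refined layer weights into the IN checkpoint and re-evaluating. A toy sketch of the transplant operation, with dummy state dicts standing in for the real IN and RL checkpoints (names and layer indices here are illustrative):

```python
import torch

def transplant_layers(state_in, state_rl, layer_ids):
    """Return an IN state dict with the listed layers replaced by RL weights."""
    merged = {name: w.clone() for name, w in state_in.items()}
    for name in merged:
        if any(f"layers.{i}." in name for i in layer_ids):
            merged[name] = state_rl[name].clone()
    return merged

# Toy checkpoints: IN weights are zeros, RL weights are ones, so the source
# of each merged layer is visible at a glance. Transplant "mid-to-late"
# layers 2 and 3.
state_in = {f"layers.{i}.weight": torch.zeros(4, 4) for i in range(4)}
state_rl = {f"layers.{i}.weight": torch.ones(4, 4) for i in range(4)}
merged = transplant_layers(state_in, state_rl, layer_ids=[2, 3])

print(merged["layers.1.weight"].sum().item())  # 0.0  -> kept from IN
print(merged["layers.3.weight"].sum().item())  # 16.0 -> taken from RL
```

Loading such a merged state dict into the IN model and re-running the fine-grained metrics is what the transferability test measures.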

Evaluation Results

Fine-grained and Benchmark Metrics

| Model | Vision (M_vis) | Vision-to-Reasoning (M_v2r) | Reasoning (M_rea) | MathVista | MathVision | MathVerse |
|---|---|---|---|---|---|---|
| Frankenstein-IN | 34.0 | 21.0 | 26.0 | 46.5 | 18.4 | 37.0 |
| Frankenstein-RL (this model) | 33.0 | 29.0 | 34.0 | 48.1 | 14.1 | 37.8 |

Parameter Freezing Analysis (RL Training)

| Model | Vision (M_vis) | Vision-to-Reasoning (M_v2r) | Reasoning (M_rea) | MathVista | MathVision | MathVerse |
|---|---|---|---|---|---|---|
| RL - Frozen Early Block | 35.0 | 31.0 | 36.0 | 48.2 | 21.0 | 34.5 |
| RL - Frozen Mid Block | 25.0 | 29.0 | 38.0 | 46.5 | 15.5 | 35.7 |
| RL - Frozen Late Block | 30.0 | 27.0 | 34.0 | 47.9 | 16.8 | 35.0 |
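The freezing ablation can be implemented by disabling gradients for a contiguous block of transformer layers before RL training resumes. A minimal sketch on a toy layer stack (the 12-layer depth and the early/mid/late thirds are illustrative, not the paper's exact block boundaries):

```python
import torch.nn as nn

def freeze_layer_block(layers, start, end):
    """Disable gradient updates for layers[start:end] (e.g. the late block)."""
    for layer in layers[start:end]:
        for p in layer.parameters():
            p.requires_grad = False

# Toy 12-layer "transformer": freeze the last third (the late block).
layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(12)])
freeze_layer_block(layers, start=8, end=12)

trainable = sum(p.numel() for p in layers.parameters() if p.requires_grad)
total = sum(p.numel() for p in layers.parameters())
print(f"trainable parameters: {trainable}/{total}")  # 2176/3264
```

Any optimizer built afterwards from `p for p in model.parameters() if p.requires_grad` will then leave the frozen block untouched during RL updates.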

Quick Start

Installation

pip install transformers accelerate
pip install qwen-vl-utils[decord]==0.0.8

Inference

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic dtype selection and device placement.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "AIcell/Frankenstein-RL",
    torch_dtype="auto",
    device_map="auto",
)

processor = AutoProcessor.from_pretrained("AIcell/Frankenstein-RL")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://your-image-url.jpg"},
            {"type": "text", "text": "Please solve this math problem step by step."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=2048)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])

Related Resources

| Resource | Link |
|---|---|
| Paper | arXiv:2602.12395 |
| Frankenstein-RL Model | AIcell/Frankenstein-RL |
| Base Model | Qwen/Qwen2.5-VL-3B-Instruct |
| OpenMMReasoner | arXiv:2511.16334 |

Citation

@article{li2026frankenstein,
  title={What does RL improve for Visual Reasoning? A Frankenstein-Style Analysis},
  author={Li, Xirui and Li, Ming and Zhou, Tianyi},
  journal={arXiv preprint arXiv:2602.12395},
  year={2026}
}

License

This model is released under the Apache 2.0 License.
