ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding
Paper • 2501.05452 • Published
This repo contains the model for the paper "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding".

```python
# Load model directly
from transformers import AutoModel

# Note: checkpoints based on Phi-3.5-vision may also need trust_remote_code=True
model = AutoModel.from_pretrained("ReFocus/Trained_Model", dtype="auto")
```
Homepage | Paper | Code
We follow the Phi-3 Cookbook for the supervised finetuning experiments.
We release our best finetuned ReFocus model, trained with full chain-of-thought data, at this HuggingFace link.
This model is finetuned from Phi-3.5-vision, and we used the following prompt during evaluation:

```
<|image|>\n{question}\nThought:
```

To encourage the model to generate bounding box coordinates to refocus on, you can try this prompt:

```
<|image_1|>\n{question}\nThought: The areas to focus on in the image have bounding box coordinates:
```
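The two prompt templates above can be assembled with a small helper. This is an illustrative sketch, not code from the repo: the function name `build_prompt` and the example question are assumptions, and the image placeholder tokens are copied verbatim from the templates shown above.

```python
# Hedged sketch: format a question with the ReFocus evaluation prompts.
# build_prompt is a hypothetical helper, not part of the released code.
def build_prompt(question: str, refocus: bool = False) -> str:
    """Return the evaluation prompt; with refocus=True, append the
    bounding-box hint that elicits coordinates to refocus on."""
    if refocus:
        return (
            f"<|image_1|>\n{question}\nThought: "
            "The areas to focus on in the image have bounding box coordinates:"
        )
    return f"<|image|>\n{question}\nThought:"

# Example (question is illustrative):
print(build_prompt("What is the tallest bar in the chart?", refocus=True))
```

The resulting string would then be passed, together with the image, to the Phi-3.5-vision processor before generation.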
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ReFocus/Trained_Model")
```