Spaces: Running on Zero

yuhangzang / Claude committed
Commit · 1e5cd04
Parent(s): c41006e
Add Gradio Space for Spatial-SSRL spatial reasoning demo
- Add app.py with a question-answering interface for spatial reasoning
- Add requirements.txt with necessary dependencies
- Update README.md with Space metadata and documentation
- Add example images (eg1.jpg, eg2.jpg, eg3.jpg) tracked with Git LFS
- Add QA.txt with example questions and expected responses

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- .gitattributes +3 -0
- README.md +28 -5
- app.py +209 -0
- examples/QA.txt +16 -0
- examples/eg1.jpg +3 -0
- examples/eg2.jpg +3 -0
- examples/eg3.jpg +3 -0
- requirements.txt +8 -0
.gitattributes
CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+examples/*.jpg filter=lfs diff=lfs merge=lfs -text
+examples/*.jpeg filter=lfs diff=lfs merge=lfs -text
+examples/*.png filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,14 +1,37 @@
 ---
-title: Spatial
-emoji:
+title: Spatial-SSRL Spatial Reasoning
+emoji: π
 colorFrom: blue
-colorTo:
+colorTo: indigo
 sdk: gradio
 sdk_version: 5.49.1
 app_file: app.py
 pinned: false
 license: apache-2.0
-short_description:
+short_description: Spatial reasoning with vision-language models
 ---
 
-
+# π Spatial-SSRL: Spatial Reasoning with Vision-Language Models
+
+This demo showcases the spatial reasoning capabilities of vision-language models trained to understand 3D spatial relationships from 2D images.
+
+## Features
+
+- **3D Location Understanding**: Determine which objects are closer to or further from the camera
+- **Orientation Analysis**: Understand which direction objects are facing
+- **Relative Positioning**: Answer questions about object positions relative to each other
+- **Step-by-step Reasoning**: The model provides detailed reasoning before answering
+
+## How to Use
+
+1. Upload an image
+2. Ask a question about spatial relationships in the image
+3. The model will provide a detailed answer with reasoning
+
+## Example Questions
+
+- "Which object is further away from the camera? A. boat B. fire hydrant"
+- "Are the kid and the teddy bear facing same or similar directions?"
+- "If I stand at the recreational vehicle's position facing where it is facing, is the dog in front of me or behind me?"
+
+The model is trained to provide answers in a structured format, with reasoning enclosed in `<think>` tags and the final answer in `\boxed{}`.
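Since the model's replies follow the `<think>`/`\boxed{}` convention described in the README, a small helper can split the reasoning from the final answer. A minimal sketch; the `extract_answer` helper is illustrative and not part of this Space's code:

```python
import re

def extract_answer(response: str) -> tuple[str, str]:
    """Split a Spatial-SSRL response into (reasoning, final answer).

    Assumes the documented format: reasoning inside <think>...</think>
    and the final answer inside \\boxed{...}.
    """
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    boxed = re.search(r"\\boxed\{(.*?)\}", response, re.DOTALL)
    reasoning = think.group(1).strip() if think else ""
    answer = boxed.group(1).strip() if boxed else response.strip()
    return reasoning, answer

reasoning, answer = extract_answer(
    "<think>\nThe boat is further away.\n</think>\n\\boxed{A. boat}"
)
print(answer)  # -> "A. boat"
```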
app.py
ADDED
|
@@ -0,0 +1,209 @@
import gradio as gr
import spaces
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "internlm/Spatial-SSRL-3B"
MAX_NEW_TOKENS = 2048

# Example questions from the examples
EXAMPLE_QUESTIONS = [
    "Consider the real-world 3D location of the objects. Which object is further away from the camera? A. boat B. fire hydrant\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}.",
    "Consider the real-world 3D orientations of the objects. Are the kid and the teddy bear facing same or similar directions, or very different directions? A. very different directions B. same or similar directions\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}.",
    "Consider the real-world 3D locations and orientations of the objects. If I stand at the recreational vehicle's position facing where it is facing, is the dog in front of me or behind me? A. behind B. in front of\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}.",
]


def get_device() -> str:
    return "cuda" if torch.cuda.is_available() else "cpu"


def select_dtype(device: str):
    # Prefer bf16 on GPUs that support it, fall back to fp16; fp32 on CPU.
    if device == "cuda":
        if torch.cuda.is_bf16_supported():
            return torch.bfloat16
        return torch.float16
    return torch.float32


def load_model():
    device = get_device()
    dtype = select_dtype(device)

    # Use device_map="auto" for proper GPU allocation with spaces.GPU decorator
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        MODEL_ID,
        torch_dtype=dtype,
        device_map="auto",
        trust_remote_code=True,
    )

    processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
    return model, processor


MODEL, PROCESSOR = load_model()


@spaces.GPU
@torch.inference_mode()
def answer_question(image: Image.Image, question: str):
    if image is None:
        return "Please upload an image.", 0

    if not question or question.strip() == "":
        return "Please enter a question.", 0

    try:
        # Validate image
        if not isinstance(image, Image.Image):
            return "Error: Invalid image format", 0

        # Cap image size: resize overly large inputs to prevent OOM
        max_size = 4096
        if image.width > max_size or image.height > max_size:
            image.thumbnail((max_size, max_size), Image.Resampling.LANCZOS)

        device = MODEL.device
        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image"},
                    {"type": "text", "text": question},
                ],
            }
        ]

        prompt_text = PROCESSOR.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )

        inputs = PROCESSOR(
            text=[prompt_text],
            images=[image],
            return_tensors="pt",
        ).to(device)

        generated_ids = MODEL.generate(
            **inputs,
            max_new_tokens=MAX_NEW_TOKENS,
            do_sample=False,
        )

        # Drop the prompt tokens so only the newly generated text is decoded
        generated_ids_trimmed = [
            out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
        ]
        output_text = PROCESSOR.batch_decode(
            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
        )
        answer = output_text[0].strip()

        input_ids = inputs.get("input_ids")
        input_length = input_ids.shape[-1] if input_ids is not None else 0
        total_length = generated_ids.shape[-1]
        num_generated_tokens = max(total_length - input_length, 0)

        return answer, int(num_generated_tokens)

    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return "Error: Out of GPU memory. Please try with a smaller image.", 0
    except Exception as e:
        return f"Error generating answer: {str(e)}", 0


def load_example(example_idx):
    """Load example image and question based on index"""
    example_images = [
        "./examples/eg1.jpg",
        "./examples/eg2.jpg",
        "./examples/eg3.jpg",
    ]
    if 0 <= example_idx < len(EXAMPLE_QUESTIONS):
        return Image.open(example_images[example_idx]), EXAMPLE_QUESTIONS[example_idx]
    return None, ""


with gr.Blocks(title="Spatial-SSRL Spatial Reasoning") as demo:
    gr.Markdown("# π Spatial-SSRL: Spatial Reasoning with Vision-Language Models")
    gr.Markdown("### Understanding 3D Spatial Relationships from 2D Images")
    gr.Markdown("✨ Upload an image and ask questions about spatial relationships, locations, and orientations! ✨")

    with gr.Row():
        with gr.Column():
            image_input = gr.Image(type="pil", label="Input Image")
            question_input = gr.Textbox(
                label="Question",
                placeholder="Ask a question about spatial relationships in the image...",
                lines=4,
            )
            submit_button = gr.Button("Submit", variant="primary")

        with gr.Column():
            answer_output = gr.Textbox(label="Answer", lines=10)
            token_output = gr.Number(label="Generated Tokens", precision=0)

    submit_button.click(
        fn=answer_question,
        inputs=[image_input, question_input],
        outputs=[answer_output, token_output],
        show_progress="full",
    )

    gr.Markdown("### 📸 Example Questions")
    gr.Markdown("Click on an example below to load it:")

    with gr.Row():
        example1_btn = gr.Button("Example 1: Boat vs Fire Hydrant")
        example2_btn = gr.Button("Example 2: Kid and Teddy Bear")
        example3_btn = gr.Button("Example 3: RV and Dog")

    example1_btn.click(
        fn=lambda: load_example(0),
        inputs=[],
        outputs=[image_input, question_input],
    )

    example2_btn.click(
        fn=lambda: load_example(1),
        inputs=[],
        outputs=[image_input, question_input],
    )

    example3_btn.click(
        fn=lambda: load_example(2),
        inputs=[],
        outputs=[image_input, question_input],
    )

    gr.Examples(
        examples=[
            ["./examples/eg1.jpg", EXAMPLE_QUESTIONS[0]],
            ["./examples/eg2.jpg", EXAMPLE_QUESTIONS[1]],
            ["./examples/eg3.jpg", EXAMPLE_QUESTIONS[2]],
        ],
        inputs=[image_input, question_input],
        outputs=[answer_output, token_output],
        fn=answer_question,
        cache_examples=True,
        label="Complete Examples",
    )

    gr.Markdown("### About")
    gr.Markdown(
        """
This demo showcases spatial reasoning capabilities of vision-language models. The model can:
- Understand 3D spatial relationships from 2D images
- Reason about object locations (near/far, front/behind)
- Analyze object orientations and facing directions
- Provide step-by-step reasoning before answering

The model is trained to provide answers in a structured format with reasoning enclosed in `<think>` tags and final answers in `\\boxed{}`.
"""
    )


demo.launch()
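Beyond the web UI, a Gradio Space like this one also exposes its click handler over the Gradio API. A hedged sketch using `gradio_client`: the Space id `OWNER/SPACE-NAME` is a placeholder, and the `/answer_question` endpoint name assumes Gradio's default naming from the function above.

```python
from gradio_client import Client, handle_file

# "OWNER/SPACE-NAME" is a placeholder; substitute the real Space id.
client = Client("OWNER/SPACE-NAME")

# Endpoint name assumes Gradio's default, derived from the answer_question
# function; confirm it on the Space's "Use via API" page.
answer, num_tokens = client.predict(
    handle_file("examples/eg1.jpg"),
    "Which object is further away from the camera? A. boat B. fire hydrant",
    api_name="/answer_question",
)
print(num_tokens, "tokens:", answer)
```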
examples/QA.txt
ADDED
|
@@ -0,0 +1,16 @@
eg1
question: "Consider the real-world 3D location of the objects. Which object is further away from the camera? A. boat B. fire hydrant\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."

Expected response: "<think>\nTo determine which object is further away from the camera, I need to assess the depth cues in the image. The boat appears to be in the water, which is typically further away than objects on the land. The fire hydrant is on the ground, closer to the viewer. Therefore, the boat is further away from the camera.\n</think>\n\\boxed{A. boat}"


eg2
question: "Consider the real-world 3D orientations of the objects. Are the kid and the teddy bear facing same or similar directions, or very different directions? A. very different directions B. same or similar directions\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."

Expected response: "<think>\nTo determine the orientation of the kid and the teddy bear, I need to consider their positions relative to each other. The kid is facing forward, towards the camera, with their head slightly tilted. The teddy bear is positioned in front of the kid, also facing forward. Both the kid and the teddy bear are oriented in the same direction, which is towards the camera.\n\nTherefore, the final answer is that they are facing the same or similar directions.\n</think>\n\\boxed{B}"


eg3
question: "Consider the real-world 3D locations and orientations of the objects. If I stand at the recreational vehicle's position facing where it is facing, is the dog in front of me or behind me? A. behind B. in front of\n You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."

Expected response: "<think>\nTo determine the position of the dog relative to the recreational vehicle, I need to consider the orientation of the RV and the dog's position in the image.\n\n1. The recreational vehicle is facing away from the camera, as indicated by the direction of its wheels and the orientation of the trailer.\n2. The dog is positioned in the foreground, closer to the camera than the RV.\n3. Since the dog is in front of the RV from the camera's perspective, it is behind the RV from the RV's perspective.\n\nTherefore, the dog is behind the recreational vehicle if I stand at the RV's position facing where it is facing.\n</think>\n\\boxed{A}"
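Note that the expected responses box their answers inconsistently: eg1 uses the full option text (`\boxed{A. boat}`) while eg2 and eg3 use the bare letter. Any script that grades against QA.txt should therefore normalize to the option letter. A minimal sketch, with the response strings abridged from QA.txt:

```python
import re

# Expected responses from examples/QA.txt, abridged to the relevant parts.
EXPECTED_RESPONSES = {
    "eg1": "<think>\n...\n</think>\n\\boxed{A. boat}",
    "eg2": "<think>\n...\n</think>\n\\boxed{B}",
    "eg3": "<think>\n...\n</think>\n\\boxed{A}",
}

def boxed_option(response: str) -> str:
    """Return the option letter inside \\boxed{...}, e.g. 'A' from '\\boxed{A. boat}'."""
    m = re.search(r"\\boxed\{\s*([A-Z])", response)
    return m.group(1) if m else ""

# Comparisons key on the letter alone, so both boxed styles grade the same way.
assert [boxed_option(r) for r in EXPECTED_RESPONSES.values()] == ["A", "B", "A"]
```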
examples/eg1.jpg
ADDED
Git LFS Details

examples/eg2.jpg
ADDED
Git LFS Details

examples/eg3.jpg
ADDED
Git LFS Details
requirements.txt
ADDED
@@ -0,0 +1,8 @@
gradio==5.49.1
spaces
transformers
torch
accelerate
torchvision
Pillow
sentencepiece