---
license: mit
---
# iFlyBot-VLM
<p align="center">
⭐️ <a href="https://xuwenjie401.github.io/iFlyBot-VLA.github.io/">Project Page</a>   |   📖 <a href="https://arxiv.org/abs/2511.04976">Technical Report</a>
</p>
## 🔥Introduction
We introduce iFlyBot-VLM, a general-purpose Vision-Language Model (VLM) specifically engineered for the domain of Embodied Intelligence. The primary objective of this model is to bridge the cross-modal semantic gap between high-dimensional environmental perception and low-level robot motion control. It achieves this by abstracting complex scene information into an "Operational Language" that is body-agnostic and transferable, thus enabling seamless perception-to-action closed-loop coordination.
The architecture of iFlyBot-VLM is designed to realize four critical functional capabilities in the embodied domain:
**🧭Spatial Understanding and Metric Estimation**: Equips the model with the capacity to understand spatial relationships and estimate relative positions among objects in the environment.
**🎯Interactive Target Grounding**: Supports diverse grounding mechanisms, including 2D/3D object detection in the visual modality, language-based object and spatial referring, and the prediction of critical object affordance regions.
**🤖Action Abstraction and Control Parameter Generation**: Generates outputs directly relevant to the manipulation domain, providing grasp poses and manipulation trajectories.
**📋Task Planning**: Leveraging the current scene understanding, this module performs multi-step prediction to decompose complex tasks into a sequence of atomic skills, supporting the robust execution of long-horizon tasks.
We anticipate that iFlyBot-VLM will serve as an efficient and scalable foundation model, driving the advancement of embodied AI from single-task capabilities toward generalist intelligent agents.
<div style="display: flex; gap: 1em; max-width: 100%;">
<img
src="https://huggingface.co/datasets/iFlyBot/iFlyBotVLM-Repo/resolve/main/images/smart_donut_chart.png"
style="flex: 1; max-width: 60%; height: auto; object-fit: contain;"
alt="iFlyBotVLM Traning Data"
>
<img
src="https://huggingface.co/datasets/iFlyBot/iFlyBotVLM-Repo/resolve/main/images/radar_performance.png"
style="flex: 1; max-width: 40%; height: auto; object-fit: contain;"
alt="iFlyBotVLM Performance"
>
</div>
## 🏗️Model Architecture
iFlyBot-VLM inherits the robust, three-stage "ViT-Projector-LLM" paradigm from established Vision-Language Models. It integrates a dedicated, incrementally pre-trained Visual Encoder with an advanced Language Model via a simple, randomly initialized MLP projector for efficient feature alignment.
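As a rough illustration of this paradigm (a minimal sketch under assumed dimensions, not the released implementation; `vit_dim` and `llm_dim` below are placeholder values), the projector stage can be written as a small MLP that maps ViT features into the LLM embedding space:
```python
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Illustrative MLP projector; vit_dim/llm_dim are assumed values."""
    def __init__(self, vit_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.LayerNorm(vit_dim),
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # (batch, num_tokens, vit_dim) -> (batch, num_tokens, llm_dim)
        return self.mlp(visual_tokens)
```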
The core enhancement lies in the ViT's Positional Encoding (PE) layer. Instead of relying solely on the original 448-dimension PE, we employ bicubic interpolation to intelligently upsample the learned positional embeddings from 448 to an enriched dimension of 896. This approach, termed Dimension-Expanded Position Embedding (DEPE), provides a significantly more nuanced spatial context vector for each visual token. This dimensional enrichment allows the model to capture more complex positional and relative spatial information without increasing the sequence length, thereby enhancing the model's ability to perform fine-grained visual reasoning and detailed localization tasks.
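To illustrate the mechanism (a minimal sketch, not the released implementation; interpreting 448→896 as input resolution with an assumed 14-pixel patch size, so the PE grid grows from 32×32 to 64×64), learned 2D positional embeddings can be bicubically upsampled like this:
```python
import torch
import torch.nn.functional as F

def expand_position_embedding(pos_embed: torch.Tensor,
                              old_size: int = 448, new_size: int = 896,
                              patch: int = 14) -> torch.Tensor:
    """Bicubic-upsample learned 2D positional embeddings.

    pos_embed: (1, old_grid**2, dim); grid and patch sizes are assumptions.
    """
    old_grid, new_grid = old_size // patch, new_size // patch
    dim = pos_embed.shape[-1]
    # (1, N, dim) -> (1, dim, H, W) so F.interpolate acts spatially
    grid = pos_embed.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new_grid, new_grid),
                         mode="bicubic", align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)

# e.g. expand a 32x32 grid of 1024-d embeddings to 64x64
pe = torch.randn(1, 32 * 32, 1024)
print(expand_position_embedding(pe).shape)  # torch.Size([1, 4096, 1024])
```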

## 📊Model Performance
iFlyBot-VLM demonstrates strong performance across a range of challenging benchmarks.

iFlyBot-VLM-8B achieves state-of-the-art (SOTA) or near-SOTA results on ten spatial understanding, spatial perception, and temporal task-planning benchmarks: Where2Place, RefSpatial-Bench, ShareRobot-affordance, ShareRobot-trajectory, BLINK (spatial), EmbSpatial, ERQA, CVBench, SAT, and EgoPlan2.
## 🚀Quick Start
### Using 🤗 Transformers to Chat
We provide example code for running `iFlyBot-VLM-8B` with `transformers`.
> Please use `transformers>=4.37.2` to ensure the model works correctly.
<details>
<summary>Python code</summary>
```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

# Standard ImageNet normalization statistics.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
class IflyRoboInference:
    def __init__(self, model_path=''):
        # Load the model in bfloat16 with FlashAttention enabled;
        # device_map may be "auto" or "balanced".
        self.model = AutoModel.from_pretrained(
            model_path,
            torch_dtype=torch.bfloat16,
            load_in_8bit=False,
            low_cpu_mem_usage=True,
            use_flash_attn=True,
            trust_remote_code=True,
            device_map="balanced").eval()
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_path, trust_remote_code=True, use_fast=False)
        self.generation_config = dict(
            do_sample=True,
            temperature=0.5,
            top_p=0.0,
            top_k=1,
            max_new_tokens=16384
        )
    def build_transform(self, input_size):
        MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
        transform = T.Compose([
            T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
            T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
            T.ToTensor(),
            T.Normalize(mean=MEAN, std=STD)
        ])
        return transform
    def find_closest_aspect_ratio(self, aspect_ratio, target_ratios, width, height, image_size):
        # Choose the tiling grid whose aspect ratio best matches the image.
        best_ratio_diff = float('inf')
        best_ratio = (1, 1)
        area = width * height
        for ratio in target_ratios:
            target_aspect_ratio = ratio[0] / ratio[1]
            ratio_diff = abs(aspect_ratio - target_aspect_ratio)
            if ratio_diff < best_ratio_diff:
                best_ratio_diff = ratio_diff
                best_ratio = ratio
            elif ratio_diff == best_ratio_diff:
                # On ties, prefer more tiles when the image is large enough.
                if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                    best_ratio = ratio
        return best_ratio
    def dynamic_preprocess(self, image, min_num=1, max_num=12, image_size=896, use_thumbnail=False):
        orig_width, orig_height = image.size
        aspect_ratio = orig_width / orig_height
        # Enumerate all (cols, rows) grids with min_num..max_num tiles.
        target_ratios = set(
            (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1)
            if i * j <= max_num and i * j >= min_num)
        target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
        target_aspect_ratio = self.find_closest_aspect_ratio(
            aspect_ratio, target_ratios, orig_width, orig_height, image_size)
        target_width = image_size * target_aspect_ratio[0]
        target_height = image_size * target_aspect_ratio[1]
        blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
        # Resize to the grid, then crop into image_size x image_size tiles.
        resized_img = image.resize((target_width, target_height))
        processed_images = []
        for i in range(blocks):
            box = (
                (i % (target_width // image_size)) * image_size,
                (i // (target_width // image_size)) * image_size,
                ((i % (target_width // image_size)) + 1) * image_size,
                ((i // (target_width // image_size)) + 1) * image_size
            )
            split_img = resized_img.crop(box)
            processed_images.append(split_img)
        assert len(processed_images) == blocks
        if use_thumbnail and len(processed_images) != 1:
            # Append a global thumbnail view alongside the tiles.
            thumbnail_img = image.resize((image_size, image_size))
            processed_images.append(thumbnail_img)
        return processed_images
    def load_image(self, image_file, input_size=896, max_num=12):
        image = Image.open(image_file).convert('RGB')
        transform = self.build_transform(input_size=input_size)
        images = self.dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
        pixel_values = torch.stack([transform(img) for img in images])
        return pixel_values
    def forward_multi_image(self, image_paths: list, question: dict):
        pixel_values = []
        num_patches_list = []
        resize_size = 448
        for image_path in image_paths:
            pixel_value = self.load_image(image_path, input_size=resize_size).to(torch.bfloat16).cuda()
            pixel_values.append(pixel_value)
            num_patches_list.append(pixel_value.size(0))
        pixel_values = torch.cat(tuple(pixel_values), dim=0)
        print(question)
        response, history = self.model.chat(
            self.tokenizer, pixel_values, question["prompt"],
            self.generation_config, history=None, return_history=True)
        print(response)
def test_spatial_from_blink():
    hf_path = "iFlyBot/iFlyBotVLM"
    ifly_robo_infer = IflyRoboInference(hf_path)
    question = {
        "idx": "val_Spatial_Relation_143",
        "sub_task": "Spatial Relation",
        "prompt": "<image> Is the person behind the cup?\nSelect from the following choices.\n(A) yes\n(B) no.\nPlease answer directly with only the letter of the correct option and nothing else."
    }
    image_path = [
        "./examples-images/val_Spatial_Relation_143_1.jpg"
    ]
    ifly_robo_infer.forward_multi_image(image_path, question)

def test_visual_correspondence_from_blink():
    hf_path = "iFlyBot/iFlyBotVLM"
    ifly_robo_infer = IflyRoboInference(hf_path)
    question = {
        "idx": "val_Visual_Correspondence_1",
        "sub_task": "Visual Correspondence",
        "prompt": "<image> <image> A point is circled on the first image, labeled with REF. We change the camera position or lighting and shoot the second image. You are given multiple red-circled points on the second image, choices of \"A, B, C, D\" are drawn beside each circle. Which point on the second image corresponds to the point in the first image? Select from the following options.\n(A) Point A\n(B) Point B\n(C) Point C\n(D) Point D.\nPlease answer directly with only the letter of the correct option and nothing else."
    }
    image_path = [
        "./examples-images/val_Visual_Correspondence_1_1.jpg",
        "./examples-images/val_Visual_Correspondence_1_2.jpg"
    ]
    ifly_robo_infer.forward_multi_image(image_path, question)
if __name__ == '__main__':
    test_spatial_from_blink()
    test_visual_correspondence_from_blink()
    # test_task_plan_from_egoplan2()  # not included in this snippet
```
</details>
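Beyond the BLINK examples, the helper class above can be driven with your own image and question; a minimal sketch (the image path and prompt below are placeholders):
```python
infer = IflyRoboInference("iFlyBot/iFlyBotVLM")
infer.forward_multi_image(
    ["./examples-images/my_scene.jpg"],  # placeholder image path
    {"prompt": "<image> Which object is closest to the camera?"},
)
```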