---
license: apache-2.0
language:
- en
base_model:
- internlm/Spatial-SSRL-Qwen3VL-4B
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- text-generation-inference
- fp8
- vllm
---

![1](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/aWSJpDL6keXNMCKxkaPBt.png)

# **Spatial-SSRL-Qwen3VL-4B-FP8**

> **Spatial-SSRL-Qwen3VL-4B-FP8** is an FP8-compressed variant of **internlm/Spatial-SSRL-Qwen3VL-4B**. This edition applies **BF16 · FP8 (F8_E4M3)** precision formats to reduce the memory footprint and increase inference throughput, while preserving the spatial reasoning and multimodal understanding strengths of the original 4B architecture.
>
> The base **Spatial-SSRL-Qwen3VL-4B** model is a spatially enhanced vision-language model built upon **Qwen3-VL-4B-Instruct**. It integrates **Spatial-SSRL**, a lightweight self-supervised reinforcement learning paradigm designed to scale RLVR efficiently. The model demonstrates strong spatial intelligence while maintaining the general visual capabilities of its base architecture.

> [!important]
> FP8 (8-bit floating point) weight and activation quantization with hardware acceleration on GPUs – [FP8 W8A8](https://docs.vllm.ai/en/stable/features/quantization/fp8/).
> W8A8 FP8-dynamic quantization recipe – [examples](https://github.com/vllm-project/llm-compressor/tree/main/examples/quantization_w8a8_fp8).

## Key Highlights

* **BF16 · FP8 (F8_E4M3) Compression**: Transformer Engine-based FP8 quantization reduces VRAM usage while maintaining spatial reasoning fidelity.
* **Spatial-SSRL Optimization**: Lightweight self-supervised reinforcement learning enhances spatial grounding and reasoning.
* **4B Vision-Language Backbone**: Efficient multimodal performance with balanced compute and memory requirements.
* **Strong Spatial Intelligence**: Improved object localization, spatial relationships, depth reasoning, and geometric understanding.
* **Preserved General Vision Capabilities**: Retains the captioning, VQA, and multimodal dialogue strengths of Qwen3-VL-4B-Instruct.
* **RLVR-Compatible Framework**: Designed for scalable reinforcement learning with verifiable rewards.
* **Optimized Deployment**: FP8 compression enables smoother inference on Hopper and other compatible GPU architectures.

## About Spatial-SSRL

**Spatial-SSRL** is a lightweight, tool-free framework that is naturally compatible with the RLVR training paradigm and easy to extend to a wide range of pretext tasks. The framework currently formulates **five spatially oriented tasks**, requiring only standard RGB or RGB-D images. It is modular and extensible, enabling the research community to introduce new spatial pretext objectives that further strengthen large vision-language models. We welcome contributions that expand Spatial-SSRL with effective spatial reasoning tasks to enhance LVLM capabilities.

## Quick Start with Transformers

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the Spatial-SSRL-Qwen3VL-4B-FP8 model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "internlm/Spatial-SSRL-Qwen3VL-4B-FP8",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "internlm/Spatial-SSRL-Qwen3VL-4B-FP8"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "sample_image.png",
            },
            {
                "type": "text",
                "text": "Describe spatial relationships between objects and explain their relative positions."
            },
        ],
    }
]

# Prepare text and vision inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# Generate and decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=1024)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)
print(output_text)
```

## Intended Use

* **Spatial Reasoning Research**: Evaluating geometric and relational understanding in LVLMs.
* **Robotics & Embodied AI**: Scene grounding and object-relationship modeling.
* **Autonomous Systems**: Depth and relative-positioning analysis.
* **3D Scene Understanding**: RGB- and RGB-D-based spatial inference.
* **Multimodal Assistants**: Visual question answering with spatial awareness.

## Limitations & Risks

> **Note**: This model focuses on spatial reasoning and multimodal understanding.

* **Extreme Occlusions**: Performance may degrade when objects are heavily occluded.
* **Unusual Camera Geometry**: Severe distortion or non-standard projections may reduce spatial accuracy.
* **Hardware Requirements**: FP8 acceleration requires compatible GPU support for optimal throughput.
* **Responsible Use**: Ensure compliance with safety and deployment regulations in robotics and autonomous-systems contexts.
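## Appendix: What FP8 (E4M3) Dynamic Quantization Does

For intuition about the F8_E4M3 format used by this checkpoint, the sketch below simulates per-tensor dynamic FP8 quantization in pure Python: a scale is derived from the current maximum absolute value, values are rounded to the nearest E4M3-representable number, and then dequantized. This is an illustrative model of the W8A8 FP8-dynamic scheme linked above, not a bit-exact reimplementation of any library; the helper names (`quantize_e4m3`, `fp8_dynamic_quant`) are our own.

```python
import math

# E4M3 format: 1 sign bit, 4 exponent bits, 3 mantissa bits.
# Largest finite magnitude is 448; smallest normal exponent is -6.
E4M3_MAX = 448.0
MANTISSA_BITS = 3
MIN_EXP = -6

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest E4M3-representable value (illustrative sketch)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), E4M3_MAX)                  # saturate at the format maximum
    exp = max(math.floor(math.log2(mag)), MIN_EXP)
    step = 2.0 ** (exp - MANTISSA_BITS)          # spacing of representable values here
    return sign * round(mag / step) * step

def fp8_dynamic_quant(tensor):
    """Per-tensor dynamic scaling: scale from the running max, quantize, dequantize."""
    amax = max(abs(v) for v in tensor)
    scale = amax / E4M3_MAX if amax > 0 else 1.0
    quantized = [quantize_e4m3(v / scale) for v in tensor]
    return [q * scale for q in quantized], scale

values = [0.013, -1.7, 3.2, 0.0, -0.41]
dequantized, scale = fp8_dynamic_quant(values)
for orig, rec in zip(values, dequantized):
    print(f"{orig:+.4f} -> {rec:+.4f}")
```

With 3 mantissa bits, the round-trip error stays within about 1/16 of each value's magnitude, which is why per-tensor dynamic scaling preserves model quality well enough for inference while halving weight storage relative to BF16.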