Add model card for MLLM-4D
Hi! I'm Niels from the Hugging Face community team. This PR adds a model card for MLLM-4D, which includes:
- Metadata for the `video-text-to-text` pipeline and `transformers` library.
- Links to the paper and official GitHub repository.
- A brief description of the model's capabilities in 4D spatiotemporal reasoning.
- Sample usage instructions for running inference as found in the repository.
- Citation information for the paper.
README.md (ADDED):
---
library_name: transformers
pipeline_tag: video-text-to-text
---

# MLLM-4D: Towards Visual-based Spatial-Temporal Intelligence

[**MLLM-4D**](https://github.com/GVCLab/MLLM-4D) is a framework that addresses gaps in training-data curation and model post-training for spatiotemporal understanding and reasoning. It enables multimodal large language models (MLLMs) to perceive and reason about how 3D space evolves over time from purely visual inputs.

- **Paper:** [MLLM-4D: Towards Visual-based Spatial-Temporal Intelligence](https://huggingface.co/papers/2603.00515)
- **Repository:** [https://github.com/GVCLab/MLLM-4D](https://github.com/GVCLab/MLLM-4D)
- **Project Page:** [https://github.com/GVCLab/MLLM-4D](https://github.com/GVCLab/MLLM-4D)

## Model Description

MLLM-4D achieves state-of-the-art spatiotemporal intelligence by modeling the relationships between objects and the camera in 3D space. It first establishes foundational 4D understanding through Supervised Fine-Tuning (SFT), then strengthens 4D reasoning with Group Relative Policy Optimization (GRPO) guided by specialized Spatiotemporal Chain-of-Thought (ST-CoT) prompting. These capabilities are achieved from purely 2D RGB inputs, with no architectural modifications.
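As background on the post-training stage (not from the model card itself): GRPO avoids a learned value model by sampling a group of responses per prompt and scoring each one against the group's own reward statistics. A minimal sketch of that group-relative advantage, assuming scalar rewards; the function name is illustrative and not part of the MLLM-4D codebase:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each reward by its group's
    mean and standard deviation. `rewards` holds scalar scores for
    responses sampled from the same prompt; returns one advantage
    per response."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Responses scoring above the group average get positive advantages,
# those below get negative ones, and the advantages sum to zero.
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```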

## Usage

To run the inference demo for MLLM-4D, follow the setup instructions in the [official repository](https://github.com/GVCLab/MLLM-4D), then use the following commands:

```bash
# for MLLM-4D-SFT
python scripts/inference.py --model_type "MLLM-4D-SFT" --model_path PATH-to-MLLM-4D-SFT

# for MLLM-4D-RFT
python scripts/inference.py --model_type "MLLM-4D-RFT" --model_path PATH-to-MLLM-4D-RFT
```

## Citation

If you find this work useful, please consider citing:

```bibtex
@article{yin2026mllm4d,
  title={MLLM-4D: Towards Visual-based Spatial-Temporal Intelligence},
  author={Yin, Xingyilang and Li, Chengzhengxu and Chang, Jiahao and Pun, Chi-Man and Cun, Xiaodong},
  journal={arXiv preprint arXiv:2603.00515},
  year={2026}
}
```