<p align="center" width="100%">
<a target="_blank"><img src="figs/FysicsWorld-logo.png" alt="" style="width: 50%; min-width: 200px; display: block; margin: auto;"></a>
</p>

<div align="center">
<br>
<h1>FysicsWorld: A Unified Full-Modality Benchmark for Any-to-Any Understanding, Generation, and Reasoning</h1>

<font size=3><div align='center' > [[🏠 Project Page](https://github.com/Fysics-AI/FysicsWorld)] [[📖 arXiv Paper](https://arxiv.org/pdf/2512.XXXX)] [[🤗 Dataset](https://huggingface.co/datasets/Fysics-AI/FysicsWorld)] [[🏆 Leaderboard](https://huggingface.co/spaces/Fysics-AI/FysicsWorld-Leaderboard)] </div></font>

</div>

## 🚀 News
* **`2025.12.14`** We release [***FysicsWorld***](https://huggingface.co/datasets/Fysics-AI/FysicsWorld), the first unified full-modality benchmark that supports bidirectional input–output across image, video, audio, and text, enabling comprehensive any-to-any evaluation across understanding, generation, and reasoning.

## 🎯 ***FysicsWorld*** Overview
<img src="figs/fig-teaser.jpg" width="100%" height="100%">

We introduce ***FysicsWorld***, the **first** unified full-modality benchmark that supports bidirectional input–output across *image, video, audio, and text*, enabling comprehensive any-to-any evaluation across understanding, generation, and reasoning. Our systematic design spans from uni-modal perception tasks to fusion-dependent reasoning under strong cross-modal coupling, allowing us to diagnose, with unprecedented clarity, the limitations and emerging strengths of modern multimodal and omni-modal architectures. In contrast to existing omni-modal and multi-modal benchmarks, ***FysicsWorld*** offers several advantages:

* **Diversity and High Quality**. ***FysicsWorld*** is characterized by **8 "*multi*"** properties, reflecting its comprehensive coverage, diversity, and robustness, namely:
*multi-dimensional* (understanding, generation, reasoning, voice interaction), *multi-modal* (text, image, video, audio as both inputs and outputs), *multi-task* (16 primary tasks, 200+ sub-tasks), *multi-source* (3,268 samples from 40+ data sources and curated web data), *multi-domain* (170+ fine-grained open-domain categories), *multi-type* (closed-ended, open-ended, and multiple-choice questions, as well as image/video/audio generation), *multi-target* (evaluates Omni-LLMs, MLLMs, modality-specific models, and unified understanding–generation models), and *multi-assurance* (multi-stage quality-control strategies).

* **Fusion-Dependent Cross-Modal Reasoning**. We propose an omni-modal data construction method, the **C**ross-**M**odal **C**omplementarity **S**creening (**CMCS**) strategy, which ensures that our tasks maintain strong cross-modal coupling, preventing single-modality shortcuts and enforcing genuinely synergistic omni-modal perception.

* **Speech-Driven Cross-Modal Interaction**. To support natural multimodal communication and interaction, we develop a speech-grounded multimodal data construction pipeline, covering 20+ authentic voices and tones, that ensures both linguistic fluency and semantic fidelity in voice-based interactions.

Based on ***FysicsWorld***, we extensively evaluate various advanced models, including Omni-LLMs, MLLMs, modality-specific models, and unified understanding–generation models. By establishing a unified benchmark and highlighting key capability gaps, FysicsWorld provides not only a foundation for evaluating emerging multimodal systems but also a roadmap for the next generation of full-modality architectures capable of genuinely holistic perception, reasoning, and interaction.

<p align="center">
<img src="figs/fig-statiscs.jpg" width="100%" height="100%">
</p>

## 🔍 Dataset Download
The full dataset, including associated multimedia files (images, videos, and audio), can be downloaded from [here](https://huggingface.co/datasets/Fysics-AI/FysicsWorld).
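
For convenience, here is a minimal download sketch using `huggingface_hub` (the `local_dir` value is just an illustrative choice):

```python
# Download the full FysicsWorld dataset snapshot (QA files plus
# images, videos, and audio) from the Hugging Face Hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Fysics-AI/FysicsWorld",
    repo_type="dataset",      # dataset repo, not a model repo
    local_dir="FysicsWorld",  # arbitrary local target directory
)
```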

## 🔮 Evaluation

To ensure a fair and standardized evaluation protocol, we release the full ***FysicsWorld*** dataset with ground-truth answers withheld, along with a test-mini subset (300 samples) that includes answers for local validation and debugging. You can find the QA data in [./data](https://github.com/Fysics-AI/FysicsWorld/tree/main/data) (full ***FysicsWorld***) and [./test-mini](https://github.com/Fysics-AI/FysicsWorld/tree/main/test-mini) (test-mini), respectively.
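
For local validation, the test-mini QA data can be loaded directly; the file name in the sketch below is an assumption for illustration, so check the actual layout of [./test-mini](https://github.com/Fysics-AI/FysicsWorld/tree/main/test-mini):

```python
import json
from pathlib import Path

# NOTE: the QA file name below is an illustrative assumption; see
# ./test-mini in the GitHub repository for the actual file layout.
qa_path = Path("test-mini") / "test-mini.json"
samples = json.loads(qa_path.read_text(encoding="utf-8"))
print(f"Loaded {len(samples)} test-mini samples")
```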

🕹️ **Usage**:

1. Download the full FysicsWorld dataset from [here](https://huggingface.co/datasets/Fysics-AI/FysicsWorld).
2. Run inference with your model on the provided questions.
3. Format the model responses according to the required [submission format](https://github.com/Fysics-AI/FysicsWorld/blob/main/eval/submission_format.json); a minimal formatting sketch follows after this list.
4. Send the formatted responses to *t1.jiangyue@outlook.com*. We will periodically update the corresponding scores on the leaderboard.
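
As referenced in step 3, the sketch below shows one way to collect responses into a JSON file; `run_model` and the record keys (`id`, `prediction`) are placeholders, so align them with [eval/submission_format.json](https://github.com/Fysics-AI/FysicsWorld/blob/main/eval/submission_format.json) before sending:

```python
import json

def build_submission(questions, run_model, out_path="submission.json"):
    """Collect one record per question and dump them as JSON.

    `run_model` stands in for your own inference code; the key names
    below are placeholders to be matched against submission_format.json.
    """
    records = []
    for q in questions:
        records.append({
            "id": q["id"],              # placeholder field names
            "prediction": run_model(q),
        })
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    return out_path
```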

## 📈 Experimental Results
- **Evaluation results of Omni-LLMs and proprietary MLLMs on image-centric omni-modal tasks.**

<p align="center">
<img src="figs/tab-image.png" width="90%" height="100%">
</p>

*Task abbreviations:*
Task1-1 (Image Understanding), Task2-1 (Speech-Driven Image Understanding), Task2-2 (Image–Audio Contextual Reasoning), Task2-3 (Speech-Based QA on Image Content), Task2-4 (Speech Generation from a Person in an Image), and Task2-5 (Audio Matching from Image Context).

- **Evaluation results of Omni-LLMs and proprietary MLLMs on video-centric omni-modal tasks.**

<p align="center">
<img src="figs/tab-video.png" width="90%" height="100%">
</p>

*Task abbreviations:*
Task1-2 (Video Understanding), Task3-1 (Speech-Driven Video Understanding), Task3-2 (Video–Audio Contextual Reasoning), Task3-3 (Speech-Based QA on Video Content), Task3-4 (Speech Generation from a Person in a Video), Task3-5 (Audio Matching from Video Context), and Task3-6 (Next-Action Prediction from Video Sequences and Current Visual State).

- **Evaluation results of open-source MLLMs on modality-supported tasks.**

<p align="center">
<img src="figs/fig-open-mllm.jpg" width="60%" height="100%">
</p>

*Task abbreviations:*
Task1-1 (Image Understanding), Task1-2 (Video Understanding), and Task3-6 (Next-Action Prediction from Video Sequences and Current Visual State).

- **Evaluation results of various models on (a) Audio Reasoning and (b) Video Generation.**

<p align="center">
<img src="figs/fig-exp-audio-video.jpg" width="90%" height="100%">
</p>

## 📖 Citation

If you find ***FysicsWorld*** helpful for your research, please consider citing our work. Thanks!

```bibtex
@article{jiang2025fysicsworld,
  title={FysicsWorld: A Unified Full-Modality Benchmark for Any-to-Any Understanding, Generation, and Reasoning},
  author={Jiang, Yue and Yang, Dingkang and Han, Minghao and Han, Jinghang and Chen, Zizhi and Liu, Yizhou and Li, Mingcheng and Zhai, Peng and Zhang, Lihua},
  journal={arXiv preprint arXiv:2512.XXXX},
  year={2025}
}
```