Upload README.md with huggingface_hub
README.md
<!-- ---
license: MIT License
tags:
- physics
- understanding
- generation
- reasoning
- multimodal
language:
- en
--- -->
<p align="center" width="100%">
<a target="_blank"><img src="figs/FysicsWorld-logo.png" alt="" style="width: 50%; min-width: 200px; display: block; margin: auto;"></a>

<h1>FysicsWorld: A Unified Full-Modality Benchmark for Any-to-Any Understanding, Generation, and Reasoning</h1>

<font size=3><div align='center'>
[[🏠 Project Page](https://github.com/Fysics-AI/FysicsWorld)]
[[📖 Paper](https://arxiv.org/pdf/2512.12756)]
[[🤗 Dataset](https://huggingface.co/datasets/Fysics-AI/FysicsWorld)]
[[👾 ModelScope](https://www.modelscope.cn/datasets/Fysics-AI/FysicsWorld)]
[[🏆 Leaderboard](https://huggingface.co/spaces/Fysics-AI/FysicsWorld-Leaderboard)]
[[🀄 中文版](README_zh.md)]
</div></font>

</div>

We introduce ***FysicsWorld***, the **first** unified full-modality benchmark that …

* **Fusion-Dependent Cross-Modal Reasoning**. We propose an omni-modal data construction method, the **C**ross-**M**odal **C**omplementarity **S**creening (**CMCS**) strategy, which ensures that our tasks maintain strong cross-modal coupling, preventing single-modality shortcuts and enforcing genuinely synergistic omni-modal perception (a sketch of the screening rule follows this list).

* **Speech-Driven Cross-Modal Interaction**. To support natural multimodal communication and interaction, we develop a speech-grounded multimodal data construction pipeline that ensures both linguistic fluency and semantic fidelity in voice-based interactions, covering 10+ authentic voices and tones (also sketched below).
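
The screening rule behind CMCS can be pictured as a simple keep/reject filter: a candidate sample survives only when no single modality is enough to answer the question but the fused modalities are. The sketch below is a hypothetical illustration of that rule, not the authors' pipeline; `answers_correctly` stands in for whatever probe model the screening actually uses.

```python
from typing import Callable, Dict, FrozenSet

def cmcs_keep(sample: Dict,
              answers_correctly: Callable[[Dict, FrozenSet[str]], bool],
              modalities=("text", "image", "audio", "video")) -> bool:
    """Hypothetical CMCS-style filter: keep a sample only if it is
    fusion-dependent, i.e. no single modality suffices on its own
    but the full set of present modalities does."""
    present = frozenset(m for m in modalities if m in sample)
    # Reject samples with a single-modality shortcut: some modality
    # alone already yields the correct answer.
    if any(answers_correctly(sample, frozenset({m})) for m in present):
        return False
    # Keep only samples where combining all present modalities succeeds.
    return answers_correctly(sample, present)
```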
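
The speech-grounding step, in turn, can be approximated as a synthesize-then-verify loop: render the text with one of the available voices, transcribe it back, and accept the audio only if the round trip stays close to the source text. This is again a hedged sketch; the `tts` and `asr` callables are hypothetical placeholders for the actual synthesis and recognition components.

```python
import difflib
import random
from typing import Callable, List, Optional, Tuple

def ground_in_speech(text: str,
                     voices: List[str],
                     tts: Callable[[str, str], bytes],
                     asr: Callable[[bytes], str],
                     min_similarity: float = 0.95) -> Optional[Tuple[bytes, str]]:
    """Hypothetical synthesize-then-verify pass: pick a voice, synthesize
    the text, and keep the audio only if an ASR round trip preserves it
    (a proxy for linguistic fluency and semantic fidelity)."""
    voice = random.choice(voices)  # one of the 10+ voices and tones
    audio = tts(text, voice)
    transcript = asr(audio)
    similarity = difflib.SequenceMatcher(
        None, text.lower(), transcript.lower()).ratio()
    return (audio, voice) if similarity >= min_similarity else None
```
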
Based on ***FysicsWorld***, we extensively evaluate various advanced models, including Omni-LLMs, MLLMs, modality-specific models, and unified understanding–generation models. By establishing a unified benchmark and highlighting key capability gaps, FysicsWorld provides not only a foundation for evaluating emerging multimodal systems but also a roadmap for the next generation of full-modality architectures capable of genuinely holistic perception, reasoning, and interaction.
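
For reference, the dataset files can be fetched from the Hub with `huggingface_hub` (the same library used to upload this README). The snippet below is a minimal sketch that assumes only the repository id shown in the links above:

```python
from huggingface_hub import snapshot_download

# Download a full snapshot of the FysicsWorld dataset repository.
local_dir = snapshot_download(
    repo_id="Fysics-AI/FysicsWorld",
    repo_type="dataset",  # it is a dataset repo, not a model repo
)
print(f"Files downloaded to: {local_dir}")
```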