Improve dataset card: add metadata, paper link, and benchmark description
#2
by nielsr HF Staff - opened
README.md
CHANGED
---
license: cc-by-nc-4.0
task_categories:
- text-to-image
- image-to-image
---

# GENIUS: Generative Fluid Intelligence Evaluation Suite
|
| 9 |
+
|
| 10 |
+
[**Project Page**](https://chawuciren11.github.io/GENIUS/) | [**Paper**](https://huggingface.co/papers/2602.11144) | [**GitHub**](https://github.com/arctanxarc/GENIUS)

**GENIUS** (**GEN**erative Fluid **I**ntelligence Eval**U**ation **S**uite) is a benchmark designed to rigorously assess **Generative Fluid Intelligence (GFI)** in Unified Multimodal Models (UMMs). Unlike existing benchmarks that predominantly focus on *Crystallized Intelligence* (recalling accumulated knowledge), GENIUS evaluates the capacity to induce patterns, reason through constraints, and adapt to novel scenarios on the fly.

## Benchmark Structure

The benchmark formalizes GFI as a synthesis of three core primitives:
- **Inducing Implicit Patterns**: e.g., inferring personalized visual preferences.
- **Executing Ad-hoc Constraints**: e.g., visualizing abstract metaphors.
- **Adapting to Contextual Knowledge**: e.g., simulating counter-intuitive physics.

The dataset is organized into five core dimensions:
- `implicit_pattern`
- `multi_semantic`
- `prior_conflicting`
- `symbolic_constraint`
- `visual_constraint`
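
If each task instance is tagged with one of these dimensions, per-dimension subsets can be assembled in a few lines of Python. The sketch below groups hypothetical records by a `dimension` field; the field name and record layout are illustrative assumptions, not the card's documented schema:

```python
from collections import defaultdict

# Hypothetical records; the actual dataset schema may differ.
records = [
    {"id": 1, "dimension": "implicit_pattern", "prompt": "..."},
    {"id": 2, "dimension": "prior_conflicting", "prompt": "..."},
    {"id": 3, "dimension": "implicit_pattern", "prompt": "..."},
]

# Group task-instance ids by their evaluation dimension.
by_dimension = defaultdict(list)
for rec in records:
    by_dimension[rec["dimension"]].append(rec["id"])

print(dict(by_dimension))  # {'implicit_pattern': [1, 3], 'prior_conflicting': [2]}
```

The same grouping pattern applies whether the records come from local JSON files or a loaded `datasets` split.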

## Evaluation

GENIUS establishes a rigorous standard for GFI, guiding the field beyond knowledge utilization toward dynamic, general-purpose reasoning. The evaluation suite uses an LMM-as-a-judge approach to score model outputs along the five dimensions listed above. For detailed instructions on model inference and evaluation, please refer to the [official GitHub repository](https://github.com/arctanxarc/GENIUS).
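
The aggregation step of such a judge-based pipeline can be sketched as follows; the score values and dictionary layout are illustrative assumptions, not the repository's actual interface:

```python
from statistics import mean

# Hypothetical per-sample judge scores, keyed by dimension; the real
# scoring scheme is defined in the GitHub repository's evaluation code.
judge_scores = {
    "implicit_pattern": [0.8, 0.6],
    "prior_conflicting": [0.5, 0.7],
}

# Average judge scores within each dimension, then across dimensions
# for a single headline number.
per_dimension = {dim: mean(scores) for dim, scores in judge_scores.items()}
overall = mean(per_dimension.values())

print({dim: round(s, 3) for dim, s in per_dimension.items()})
print(round(overall, 3))
```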

## Citation

```bibtex
@misc{an2026geniusgenerativefluidintelligence,
      title={GENIUS: Generative Fluid Intelligence Evaluation Suite},
      author={Ruichuan An and Sihan Yang and Ziyu Guo and Wei Dai and Zijun Shen and Haodong Li and Renrui Zhang and Xinyu Wei and Guopeng Li and Wenshan Wu and Wentao Zhang},
      year={2026},
      eprint={2602.11144},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.11144},
}
```

## License

The dataset and code are released under **CC-BY-NC 4.0** and are intended for academic research only. Commercial use is not permitted.