DongSky dreamerlin committed · Commit 64e7be7 · verified · 1 Parent(s): 2f3ad7b

Update README.md (#2)


- Update README.md (1192dd568a0550bd45eed7ee9d2b224258568728)


Co-authored-by: Lin <dreamerlin@users.noreply.huggingface.co>

Files changed (1): README.md (+76 −3)
README.md CHANGED
@@ -1,3 +1,76 @@
- ---
- license: mit
- ---
 
+ ---
+ license: mit
+ task_categories:
+ - image-text-to-text
+ - visual-question-answering
+ - question-answering
+ - text-to-image
+ language:
+ - en
+ - zh
+ pretty_name: AEGIS
+ size_categories:
+ - 1K<n<10K
+ ---
+
+ # AEGIS: Exploring the Limit of World Knowledge Capabilities for Unified Multimodal Models
+
+ [\[📂 GitHub\]](https://github.com/DongSky/AEGIS) [\[🆕 Blog\]](https://m1saka.moe/aegis/) [\[📜 Paper\]](https://arxiv.org/abs/2601.00561)
+
+ ## Summary
+
+ ![teaser](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/NkES3_lNlSBzv2mVxPqgz.png)
+
+ The capability of Unified Multimodal Models (UMMs) to apply world knowledge across diverse tasks remains a critical, unresolved challenge. Existing benchmarks fall short, offering only siloed, single-task evaluations with limited diagnostic power. To bridge this gap, we propose AEGIS (**A**ssessing **E**diting, **G**eneration, **I**nterpretation-**U**nderstanding for **S**uper-intelligence), a comprehensive multi-task benchmark covering visual understanding, generation, editing, and interleaved generation. AEGIS comprises 1,050 challenging, manually annotated questions spanning 21 topics (including STEM, humanities, and daily life) and 6 reasoning types. To evaluate the world-knowledge performance of UMMs without ambiguous metrics, we further propose Deterministic Checklist-based Evaluation (DCE), a protocol that replaces ambiguous prompt-based scoring with atomic "Y/N" judgments, enhancing evaluation reliability. Our extensive experiments reveal that most UMMs exhibit severe world knowledge deficits and that performance degrades significantly with complex reasoning. Additionally, simple plug-in reasoning modules can partially mitigate these vulnerabilities, highlighting a promising direction for future research. These results establish world-knowledge-based reasoning as a critical frontier for UMMs.
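The DCE protocol described above boils down to aggregating atomic "Y/N" judgments into a deterministic score. The following is a minimal sketch of that idea; the function name, data layout, and aggregation rule are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of Deterministic Checklist-based Evaluation (DCE):
# each question carries a checklist of atomic criteria, a judge answers
# "Y" or "N" per item, and the score is the fraction of "Y" answers.
# This replaces a single ambiguous 1-10 prompt-based rating.

def dce_score(judgments: list[str]) -> float:
    """Aggregate atomic Y/N judgments into a deterministic score in [0, 1]."""
    if not judgments:
        return 0.0
    assert all(j in ("Y", "N") for j in judgments), "judgments must be atomic Y/N"
    return sum(j == "Y" for j in judgments) / len(judgments)

# Example: a generated image satisfies 3 of 4 checklist items.
checklist = ["Y", "Y", "N", "Y"]
print(dce_score(checklist))  # 0.75
```

Because each judgment is binary, two runs of the judge that agree on every atomic item necessarily produce the same score, which is the reliability argument behind checklist-based evaluation.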
+
+ ## Contribution
+
+ ![compare_table](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/Of0OaW6IdWmyFXnTi13kK.png)
+
+ The main contributions of this work are as follows:
+
+ - **Comprehensive Multi-Task Benchmark**: Assesses visual understanding, generation, editing, and interleaved generation simultaneously.
+ - **Extensive Knowledge Coverage**: 1,050 questions across 21 topics (STEM, humanities, daily life) and 6 reasoning types.
+ - **Deterministic Evaluation (DCE)**: A novel checklist-based protocol that replaces ambiguous scores with atomic "Yes/No" judgments for reliability.
+ - **In-depth Diagnosis**: Reveals severe world knowledge deficits in SOTA UMMs and the impact of reasoning complexity.
+
+ ## Data Statistics
+
+ ![data_state](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/T0uXdrBJsGwYgtAOs3svL.png)
+
+ AEGIS covers three general domains (STEM, humanities, and daily life) with 21 diverse topics. Each topic contains 15 prompts for each of visual understanding, generation, and editing, as well as 5 visual interleaved generation questions to measure complex generative capabilities. Furthermore, AEGIS incorporates six distinct reasoning types into the majority of its prompts, requiring UMMs to possess inherent reasoning capabilities to complete each request.
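As a quick sanity check, the per-topic counts above multiply out to the benchmark's stated total of 1,050 questions:

```python
# Sanity check of the dataset size implied by the statistics above:
# 21 topics, 15 prompts per topic for each of the three single-image tasks
# (understanding, generation, editing), plus 5 interleaved-generation
# questions per topic.
topics = 21
prompts_per_task = 15
single_tasks = 3          # understanding, generation, editing
interleaved_per_topic = 5

total = topics * (prompts_per_task * single_tasks + interleaved_per_topic)
print(total)  # 1050
```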
+
+ ## Visualization
+
+ ![dataset_view_0](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/JnCDaWoeVSi3qnIxRQQuC.png)
+
+ ![dataset_view_1](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/qNavKsYvpmbi0slpFBp7_.png)
+
+ ![dataset_view_3](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/3QkE0P8IRQTpMOswtUnTb.png)
+
+ ## Usage
+
+ Please refer to our [GitHub repository](https://github.com/DongSky/AEGIS) for usage details.
+
+ ## Citation
+
+ If you find this work helpful in your research, please consider citing:
+
+ ```bibtex
+ @misc{aegis,
+       title={AEGIS: Exploring the Limit of World Knowledge Capabilities for Unified Multimodal Models},
+       author={Jintao Lin and Bowen Dong and Weikang Shi and Chenyang Lei and Suiyun Zhang and Rui Liu and Xihui Liu},
+       year={2026},
+       eprint={2601.00561},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2601.00561},
+ }
+ ```