Commit · 10299a5
Parent(s): 374594b
feat: adapt to MMIE
src/about.py CHANGED (+1 -0)

@@ -29,6 +29,7 @@ TITLE = """<h1 align="center" id="space-title">MMIE</h1>"""
 INTRODUCTION_TEXT = """
 # MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models
 We present MMIE, a Massive Multimodal Interleaved understanding Evaluation benchmark, designed for Large Vision-Language Models (LVLMs). MMIE offers a robust framework for evaluating the interleaved comprehension and generation capabilities of LVLMs across diverse fields, supported by reliable automated metrics.
+
 [Website](https://mmie-bench.github.io) | [Code](https://github.com/Lillianwei-h/MMIE-Eval) | [Dataset](https://huggingface.co/datasets/MMIE/MMIE) | [Results](https://huggingface.co/spaces/MMIE/Leaderboard) | [Eval Model](https://huggingface.co/MMIE/MMIE-Eval) | [Paper]()
 """
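The one added line is a blank line before the link row inside `INTRODUCTION_TEXT`. In Markdown, a blank line is what separates blocks: without it, a renderer would join the description and the link row into a single paragraph. A minimal sketch of that behavior (the strings and the `paragraphs` helper are illustrative, not part of the Space's code; the split on `\n\n` approximates how CommonMark renderers group paragraph blocks):

```python
# Illustration: Markdown paragraphs are separated by blank lines.
# Without the blank line added in this commit, the description and
# the link row would render as one paragraph.
without_blank = (
    "We present MMIE, a benchmark for LVLMs.\n"
    "[Website](https://mmie-bench.github.io)"
)
with_blank = (
    "We present MMIE, a benchmark for LVLMs.\n"
    "\n"
    "[Website](https://mmie-bench.github.io)"
)

def paragraphs(md: str) -> list[str]:
    # Split on blank lines, roughly how CommonMark groups paragraph blocks.
    return [p for p in md.split("\n\n") if p.strip()]

print(len(paragraphs(without_blank)))  # 1 paragraph
print(len(paragraphs(with_blank)))     # 2 paragraphs
```

This is why the hunk header reads `-29,6 +29,7`: six lines of context, one inserted blank line.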