Update pipeline tag to image-text-to-text #2
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,24 +1,25 @@
 ---
-
+base_model:
+- llava-hf/llava-1.5-7b-hf
+- OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
+datasets:
+- MLLM-CL/MLLM-CL-ReplayData
 language:
 - en
+library_name: transformers
+license: apache-2.0
 metrics:
 - accuracy
-
-- llava-hf/llava-1.5-7b-hf
-- OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
-base_model_relation: adapter
+pipeline_tag: image-text-to-text
 tags:
 - router
 - MLLM-CL
 - llava
 - internvl
 - MR-LoRA
-
-library_name: transformers
-datasets:
-- MLLM-CL/MLLM-CL-ReplayData
+base_model_relation: adapter
 ---
+
 ## MLLM-CL Benchmark Description
 MLLM-CL is a novel benchmark encompassing domain and ability continual learning, where the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains,
 whereas the latter evaluates on non-IID scenarios with emerging model ability.