---
license: cc-by-4.0
task_categories:
- any-to-any
language:
- en
tags:
- continual learning
---

# ContinuaL-NExT Benchmark Card

## Dataset details

This benchmark is built upon a collection of widely used and publicly available multimodal datasets for both understanding and generation tasks, including VQAv2, ImageNet, Flickr30k, OCR-VQA, RefCOCO, and HQEdit.

This benchmark evaluates the **multimodal continual learning** ability of **unified generation and understanding MLLMs**.

For specific information, please refer to our code (https://github.com/JingyangQiao/MAGE) and paper (coming soon).

## Acknowledgement
Some datasets in this benchmark (VQAv2, OCR-VQA, RefCOCO, and ImageNet) are modified versions of **[CoIN](https://huggingface.co/datasets/Zacks-Chen/CoIN)** by Chen et al., available under CC-BY-4.0. Modifications include adaptation and integration with new data to form a new benchmark. Full attribution to the original authors is maintained. We thank the authors for their contributions to the open-source community.