---
license: cc-by-4.0
task_categories:
  - any-to-any
language:
  - en
tags:
  - continual learning
---

# Continual-NExT Benchmark Card

## Dataset details

This benchmark is built upon a collection of widely used and publicly available multimodal datasets for both understanding and generation tasks, including VQAv2, ImageNet, Flickr30k, OCR-VQA, RefCOCO, and HQEdit.

This benchmark is designed to evaluate the multimodal continual learning ability of unified generation and understanding MLLMs.

For specific information, please refer to our code (https://github.com/JingyangQiao/MAGE) and paper (Coming Soon).
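As a minimal sketch of how the benchmark data might be accessed, the snippet below uses the Hugging Face `datasets` library; the repository id and split name are assumptions and may differ from the actual layout of this repository, so please check the files tab or our code for the exact loading procedure.

```python
from datasets import load_dataset

# Hypothetical repository id; replace with the actual dataset id on the Hub.
dataset = load_dataset("jingyang/Continual-NExT")

# Inspect the available splits and one sample record (split name is an assumption).
print(dataset)
print(dataset["train"][0])
```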

## Acknowledgement

Some datasets in this benchmark (VQAv2, OCR-VQA, RefCOCO, and ImageNet) are modified versions of [CoIN](https://huggingface.co/datasets/Zacks-Chen/CoIN) by Chen et al., available under CC-BY-4.0. Modifications include adaptation and integration with new data to form a new benchmark. Full attribution to the original authors is maintained. We thank the authors for their contributions to the open-source community.