---
license: cc-by-4.0
task_categories:
  - text-classification
language:
  - en
tags:
  - e-commerce
size_categories:
  - 100K<n<1M
---

# EcomMMMU

## Introduction

EcomMMMU is a large-scale multimodal multitask understanding dataset for e-commerce applications, containing 406,190 samples and 8,989,510 product images across 34 product categories. It is designed to systematically evaluate how multimodal large language models (MLLMs) utilize visual information in real-world shopping scenarios.

Unlike prior datasets that treat all images equally, EcomMMMU explicitly investigates when and how multiple product images contribute to understanding. It includes a specialized vision-salient subset (VSS), designed to test scenarios where textual information alone is insufficient and visuals are crucial.

## Dataset Sources

## Quick Start

Run the following Python snippet to load the data:

```python
from datasets import load_dataset

dataset = load_dataset("NingLab/EcomMMMU")
```
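Each sample pairs product text with one or more product images. The exact column names of the released schema are not documented here; the sketch below uses hypothetical `text`, `images`, and `label` fields purely to illustrate iterating over multi-image samples once the dataset is loaded.

```python
# Hypothetical sketch: iterating samples that each carry multiple product images.
# The field names ("text", "images", "label") are assumptions, not the confirmed schema.
samples = [
    {"text": "Wireless mouse, ergonomic design",
     "images": ["img_001.jpg", "img_002.jpg"],
     "label": "electronics"},
    {"text": "Cotton t-shirt, crew neck",
     "images": ["img_003.jpg"],
     "label": "apparel"},
]

def summarize(sample):
    """Return (number of images, truncated text preview) for one sample."""
    return len(sample["images"]), sample["text"][:30]

for s in samples:
    n_images, preview = summarize(s)
    print(f"{preview!r}: {n_images} image(s), label={s['label']}")
```

In the real dataset, replace the toy `samples` list with a split of the loaded `dataset` object and adapt the field names to the actual schema.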

## License

Please also check the license of each source subset, as in our curated dataset ECInstruct:

| Dataset | License Type |
|---|---|
| Amazon Review | Not listed |
| AmazonQA | Not listed |
| Shopping Queries Dataset | Apache License 2.0 |

## Citation

```bibtex
@article{ling2025ecommmmu,
  title={EcomMMMU: Strategic Utilization of Visuals for Robust Multimodal E-Commerce Models},
  author={Ling, Xinyi and Du, Hanwen and Zhu, Zhihui and Ning, Xia},
  journal={arXiv preprint arXiv:2508.15721},
  year={2025}
}
```