---
license: apache-2.0
task_categories:
- text-classification
- text-generation
- image-classification
- image-to-text
language:
- zh
- en
size_categories:
- 1M<n<10M
---

This is the Hugging Face repository for the paper [MOON: Generative MLLM-based Multimodal Representation Learning for E-commerce Product Understanding](https://arxiv.org/abs/2508.11999v5), accepted to WSDM 2026 (oral).

In this paper, we argue that generative Multimodal Large Language Models (MLLMs) hold significant potential for improving product representation learning, and we propose MOON, the first generative MLLM-based model for this task.

Furthermore, we construct and release a large-scale, real-world multimodal benchmark for product understanding, **MM-Bench-E-Commerce (MBE)**, which supports a wide range of downstream tasks, including cross-modal retrieval, multi-granularity product classification, and attribute prediction.
The benchmark comprises 2.7M training samples and 410K evaluation samples, all collected from real-world products and user purchases on Taobao, one of the largest e-commerce platforms in China.
Its retrieval tasks are grounded in actual purchase behaviors rather than trivial category matching, offering a more realistic assessment of product understanding in practical applications.
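Retrieval benchmarks of this kind are typically scored with metrics such as recall@k: the fraction of queries whose purchased item is ranked among the top-k candidates. Below is a minimal, hypothetical sketch of such scoring with cosine similarity over embeddings — illustrative only, not the paper's released evaluation code:

```python
import numpy as np

def recall_at_k(query_emb, item_emb, positives, k=10):
    """Fraction of queries whose ground-truth item index appears in the
    top-k items ranked by cosine similarity (illustrative sketch)."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    sims = q @ d.T                           # (num_queries, num_items)
    topk = np.argsort(-sims, axis=1)[:, :k]  # top-k item indices per query
    hits = [pos in row for pos, row in zip(positives, topk)]
    return float(np.mean(hits))

# Toy example: 3 queries, 5 candidate items, one purchased item per query.
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 8))
items = rng.normal(size=(5, 8))
print(recall_at_k(queries, items, positives=[0, 1, 2], k=3))
```

In a purchase-grounded setup like MBE's, `positives` would come from actual user purchase records rather than category labels.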
```bibtex
@article{zhang2025moon,
  title={MOON: Generative MLLM-based Multimodal Representation Learning for E-commerce Product Understanding},
  author={Zhang, Daoze and Fu, Chenghan and Nie, Zhanheng and Liu, Jianyu and Guan, Wanxian and Gao, Yuan and Song, Jun and Wang, Pengjie and Xu, Jian and Zheng, Bo},
  journal={arXiv preprint arXiv:2508.11999},
  year={2025}
}
```