---
license: apache-2.0
size_categories:
- 100K<n<1M
---
## Abstract
The latent representation in learned image compression encompasses channel-wise, local spatial, and global spatial correlations, which are essential for the entropy model to capture for conditional entropy minimization. Efficiently capturing these contexts within a single entropy model, especially in high-resolution image coding, presents a challenge due to the computational complexity of existing global context modules. To address this challenge, we propose the Linear Complexity Multi-Reference Entropy Model (MEM++). Specifically, the latent representation is partitioned into multiple slices. For channel-wise contexts, previously compressed slices serve as the context for compressing a particular slice. For local contexts, we introduce a shifted-window-based checkerboard attention module. This module ensures linear complexity without sacrificing performance. For global contexts, we propose a linear complexity attention mechanism. It captures global correlations by decomposing the softmax operation, enabling the implicit computation of attention maps from previously decoded slices. Using MEM++ as the entropy model, we develop the image compression method MLIC++. Extensive experimental results demonstrate that MLIC++ achieves state-of-the-art performance, reducing BD-rate by 13.39% on the Kodak dataset compared to VTM-17.0 in Peak Signal-to-Noise Ratio (PSNR). Furthermore, MLIC++ exhibits linear computational complexity and memory consumption with resolution, making it highly suitable for high-resolution image coding. Code and pre-trained models are available at https://github.com/JiangWeibeta/MLIC. The training dataset is available at https://huggingface.co/datasets/Whiteboat/MLIC-Train-100K.
## Dataset
This dataset is split into 37 volumes compressed with 7z; all volumes must be downloaded before extraction.
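With a multi-volume 7z archive, pointing `7z` at the first volume is enough: it locates the remaining `.002` through `.037` parts automatically, provided they all sit in the same directory. A minimal sketch, assuming the volumes follow a `MLIC-Train-100K.7z.001` naming scheme (the exact file names are an assumption — match them to the files actually hosted in this repository):

```shell
# Download all 37 volumes into one directory first (e.g. with
# huggingface-cli or git lfs), then extract starting from the
# first volume; 7z finds the remaining parts by itself.
# NOTE: the archive name below is an assumed placeholder.
7z x MLIC-Train-100K.7z.001 -oMLIC-Train-100K
```

The `-o` switch takes the output directory with no space after it; extraction fails with a "missing volume" error if any of the 37 parts is absent.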
## Citation
If you use this dataset (MLIC-Train-100K) in your research or project:
- Please mention the dataset by including the name "MLIC-Train-100K" in your paper.
- Please cite our MLIC, MLIC++, and MLICv2 papers using the following references.
Thank you!
### MLIC
@inproceedings{jiang2023mlic,
title={MLIC: Multi-Reference Entropy Model for Learned Image Compression},
author={Jiang, Wei and Yang, Jiayu and Zhai, Yongqi and Ning, Peirong and Gao, Feng and Wang, Ronggang},
doi={10.1145/3581783.3611694},
booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
pages={7618--7627},
year={2023}
}
### MLIC++
@inproceedings{jiang2023mlicpp,
title={MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression},
author={Jiang, Wei and Wang, Ronggang},
booktitle={ICML 2023 Workshop Neural Compression: From Information Theory to Applications},
year={2023},
url={https://openreview.net/forum?id=hxIpcSoz2t}
}
### MLICv2
@article{jiang2025mlicv2,
title={MLICv2: Enhanced Multi-Reference Entropy Modeling for Learned Image Compression},
author={Jiang, Wei and Zhai, Yongqi and Yang, Jiayu and Gao, Feng and Wang, Ronggang},
journal={arXiv preprint arXiv:2504.19119},
year={2025}
}