LoC-LIC: Low Complexity Learned Image Coding Using Hierarchical Feature Transforms

Authors

  • Ayman A. Ameen1, Thomas Richter2, André Kaup1

1Multimedia Communications and Signal Processing, Friedrich-Alexander University Erlangen-Nürnberg, Germany
2Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany

Webpage | Full Paper | Hugging Face | BibTeX

Model

Abstract

Learned image compression has shown strong rate-distortion performance, yet adoption is limited by computational complexity rather than quality. The main bottleneck is the high-resolution convolutional layers that map pixels to feature maps. We address this with a hierarchical feature extraction transform that uses fewer channels at high spatial resolutions and increases channels only as spatial dimensions shrink. This cuts the forward pass complexity from 1256 kMAC/Pixel to 270 kMAC/Pixel while maintaining competitive rate-distortion performance. The approach enables efficient learned compression on current hardware and provides a practical path to deployment without specialized accelerators.
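To see why high-resolution layers dominate the cost, note that a convolution costs `C_in × C_out × k²` MACs per output pixel, and the number of output pixels shrinks quadratically with each downsampling stage. The sketch below uses hypothetical channel counts (not the paper's actual configuration) to illustrate the principle: keeping channels narrow at full resolution and widening them only after downsampling sharply reduces the total kMAC/pixel.

```python
def conv_macs_per_input_pixel(c_in, c_out, kernel, total_downsample):
    """MACs per *input-resolution* pixel for one conv layer.

    Each output pixel costs c_in * c_out * kernel^2 MACs; at a
    cumulative spatial downsampling factor d, there are 1/d^2
    output pixels per input pixel.
    """
    return c_in * c_out * kernel * kernel / (total_downsample ** 2)

# Illustrative 4-stage analysis transforms (3x3 convs, stride-2 stages).
# Channel counts are made up for this sketch, not taken from the paper.
# Uniform-width design: same width at every scale.
uniform = [(3, 192, 2), (192, 192, 4), (192, 192, 8), (192, 192, 16)]
# Hierarchical design: few channels at high resolution, more when small.
hierarchical = [(3, 48, 2), (48, 96, 4), (96, 192, 8), (192, 384, 16)]

for name, layers in [("uniform", uniform), ("hierarchical", hierarchical)]:
    total = sum(conv_macs_per_input_pixel(ci, co, 3, d) for ci, co, d in layers)
    print(f"{name}: {total / 1000:.1f} kMAC/pixel")
```

Even though the hierarchical variant ends with twice as many channels in its deepest layer, its early high-resolution stages are so much cheaper that the overall budget drops by roughly a factor of three in this toy setup.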

Overview

Our approach introduces hierarchical feature extraction transforms for the analysis and synthesis paths to reduce computational cost while preserving compression performance. Key points:

  • Hierarchical feature encoder/decoder that allocates fewer channels at large spatial sizes and more channels at smaller sizes.
  • Forward complexity reduced from 1256 kMAC/Pixel to 270 kMAC/Pixel.
  • Hyper-autoencoder with a multi-reference entropy model to maintain competitive rate-distortion performance.
  • Trained on a large-scale dataset built from ImageNet, COCO 2017, Vimeo90K, and DIV2K.
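The channel-allocation idea above can be sketched as a minimal PyTorch analysis transform. This is not the paper's architecture: the channel widths, activation, and layer count here are illustrative placeholders, chosen only to show the pattern of doubling the width each time the spatial resolution halves.

```python
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Toy analysis transform: few channels at high spatial resolution,
    doubling the width at each stride-2 downsampling stage.

    Widths are hypothetical, not those used in LoC-LIC.
    """

    def __init__(self, widths=(48, 96, 192, 384)):
        super().__init__()
        layers, c_in = [], 3
        for c_out in widths:
            layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                       nn.GELU()]
            c_in = c_out
        # Drop the activation after the final conv producing the latent.
        self.net = nn.Sequential(*layers[:-1])

    def forward(self, x):
        return self.net(x)


enc = HierarchicalEncoder()
y = enc(torch.randn(1, 3, 64, 64))
print(y.shape)  # latent at 1/16 resolution with the widest channel count
```

A matching synthesis transform would mirror this schedule with transposed convolutions, spending its widest layers at the smallest spatial size for the same complexity benefit.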

Results

Our method achieves competitive rate-distortion performance at substantially lower complexity. The following figure summarizes the trade-off against state-of-the-art models.

Installation

To install the required dependencies, run the following commands:


git clone https://github.com/Ayman-Ameen/loc-lic
cd loc-lic

conda create -n loclic python=3.10
conda activate loclic
conda install pip
pip install -r requirements.txt

Usage

Pretrained weights are available at Hugging Face.

To test the model, run the following command:

python scripts/test.py --main_path your_main_path --test_dataset your_test_dataset --checkpoint your_checkpoint --output_dir your_output_dir

Citation

@misc{ameen2025loclic,
      title={LoC-LIC: Low Complexity Learned Image Coding Using Hierarchical Feature Transforms}, 
      author={Ayman A. Ameen and Thomas Richter and André Kaup},
      year={2025},
      eprint={2504.21778},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2504.21778}, 
}

Note

This repository builds on code from several open-source repositories, modified to fit the requirements of the paper. The original repositories are:
