ProCrop: Learning Aesthetic Image Cropping from Professional Compositions
Paper: [arXiv:2505.22490](https://arxiv.org/abs/2505.22490)
This is the headline supervised checkpoint for the AAAI 2026 paper "ProCrop: Learning Aesthetic Image Cropping from Professional Compositions" by Zhang et al.
ProCrop is a retrieval-augmented framework for aesthetic image cropping that uses professional photography compositions as guidance. Given a query image, ProCrop retrieves compositionally similar professional photographs and uses them to guide crop-box prediction.
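The retrieval step can be sketched as a nearest-neighbor lookup in an embedding space. The sketch below is illustrative only: the function name `retrieve_top_k` is hypothetical, and the embeddings are assumed to come from an image encoder such as CLIP (which the install instructions pull in), not from ProCrop's actual retrieval code.

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, ref_embs: np.ndarray, k: int = 5):
    """Return indices of the k reference compositions most similar to the query.

    query_emb: (d,) embedding of the query image (e.g. from CLIP's image encoder).
    ref_embs:  (n, d) embeddings of the professional reference images.
    """
    # Normalize so the dot product becomes cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sims = r @ q                    # cosine similarity to each reference, shape (n,)
    return np.argsort(-sims)[:k]    # indices of the k best matches
```

The retrieved references are then fed to the model as additional conditioning alongside the query image.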
| Metric | Value |
|---|---|
| IoU | 0.843 |
| BDE (boundary displacement error) | 0.036 |
This checkpoint matches the FLMS row of Table 3 in the paper.
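For reference, the two metrics can be computed as below. IoU is standard intersection-over-union; BDE here follows the common "mean normalized edge displacement" definition, which may differ in minor details from the paper's exact evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two crop boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def bde(a, b, img_w, img_h):
    """Mean displacement of the four box edges, normalized by image size."""
    dx = (abs(a[0] - b[0]) + abs(a[2] - b[2])) / img_w
    dy = (abs(a[1] - b[1]) + abs(a[3] - b[3])) / img_h
    return (dx + dy) / 4.0
```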
| Property | Value |
|---|---|
| File | procrop_flms_supervised.pth |
| Size | 512 MB |
| Original filename | checkpoint0008200.8425250053405762.pth |
| Trainable params | ~44.8M |
| Backbone | ResNet-50 (DC5) + Transformer encoder/decoder |
| Training data | CPCDataset (supervised) + AVA retrieval references |
| Evaluation | FLMS test set, IoU = 0.8425 |
| Training epoch | 83 |
| Crop queries | 24 (Conditional DETR style) |
```bash
git clone https://github.com/BWGZK-keke/ProCrop.git
cd ProCrop
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
```
```python
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="BWGZK/ProCrop",
    filename="procrop_flms_supervised.pth",
)
```
Or with the CLI:
```bash
huggingface-cli download BWGZK/ProCrop procrop_flms_supervised.pth --local-dir ./checkpoints
```
```bash
cd cropping
python test_singleimage.py \
    --dataset_root /path/to/your/images \
    --retrieval_cache_dir /path/to/retrieval_tables \
    --retrieval_img_dir /path/to/CGL_images \
    --resume ./checkpoints/procrop_flms_supervised.pth \
    --crop_savepath ./results
```
```bash
cd cropping
python main_cpc.py \
    --dataset_root /path/to/FLMS \
    --retrieval_cache_dir /path/to/retrieval_tables \
    --resume ./checkpoints/procrop_flms_supervised.pth \
    --eval
```
You also need the precomputed retrieval tables and the reference image set referenced by the `--retrieval_cache_dir` and `--retrieval_img_dir` arguments above.
ProCrop extends Conditional DETR with a retrieval augmentation module:
Core implementation: `cropping/models/conditional_detr_cpc.py`
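Since the model predicts 24 crop queries in Conditional DETR style, a typical post-processing step selects the most confident query and maps its normalized box to pixel coordinates. The sketch below assumes a DETR-style normalized `(cx, cy, w, h)` output format; this is an illustration of the decoding pattern, not ProCrop's exact code.

```python
def decode_best_crop(scores, boxes_cxcywh, img_w, img_h):
    """Pick the highest-scoring of the crop queries and return a pixel box.

    scores:       per-query confidences (e.g. 24 values, one per crop query).
    boxes_cxcywh: matching normalized (cx, cy, w, h) boxes in [0, 1].
    """
    best = max(range(len(scores)), key=lambda i: scores[i])
    cx, cy, w, h = boxes_cxcywh[best]
    # Convert normalized center/size to pixel corner coordinates (x1, y1, x2, y2).
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return (x1, y1, x2, y2)
```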
```bibtex
@article{ProCrop2025,
  title={ProCrop: Learning Aesthetic Image Cropping from Professional Compositions},
  author={Zhang, Ke and Ding, Tianyu and Jiang, Jiachen and Chen, Tianyi and Zharkov, Ilya and Patel, Vishal M. and Liang, Luming},
  journal={arXiv preprint arXiv:2505.22490},
  year={2025}
}
```
Apache 2.0. The model builds on ConditionalDETR, RALF, and Segment Anything; please consult their respective licenses.