---
pretty_name: Outpainted for Image Cropping
license: other
license_name: research-only
task_categories:
  - image-to-image
  - object-detection
tags:
  - image
  - computer-vision
  - image-cropping
  - bounding-box
  - outpainting
  - inpainting
  - stable-diffusion
  - composition
  - imagefolder
size_categories:
  - 10K<n<100K
---

Outpainted for Image Cropping

English | 中文

Dataset Overview

This dataset contains images generated by Stable Diffusion v2 Inpaint through outpainting, along with bounding-box annotations marking the "original image region" within each outpainted image. It is mainly intended for research tasks such as image cropping, original frame recovery, composition-aware cropping, and outpainting-aware visual understanding.

Each sample contains:

  • An outpainted image;
  • orig_bbox: the location of the original image in the expanded canvas;
  • composition_tags: a list of image composition tags; the list may be empty for some samples.

Data Generation Pipeline

[Figure: data generation pipeline]

The data generation pipeline is as follows:

  1. Collect professional photographs or high-aesthetic-score images.
  2. Obtain or generate image descriptions, for example by using BLIP to generate captions.
  3. Set the expansion margins.
  4. Use Stable Diffusion v2 Inpaint to complete the expanded regions.
  5. Use positive prompts to constrain the generated content.
  6. Use negative prompts to suppress undesired content, such as frames, borders, text, and watermarks.
  7. Perform artifact detection and consistency detection on the generated results.
  8. Conduct manual inspection.
  9. Keep the samples that pass quality control, and record the bbox of the original image region to form training pairs.
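Steps 3–4 above can be sketched with PIL: place the source image on a larger canvas, build the inpainting mask (white where the model should generate new content), and record the original region's bbox. The margins, fill color, and function name below are illustrative assumptions, not the exact parameters used to build this dataset.

```python
from PIL import Image

def expand_canvas(img, left, top, right, bottom, fill=(127, 127, 127)):
    """Place `img` on a larger canvas and return (canvas, mask, orig_bbox).

    The mask is white (255) where the inpainting model should generate
    new content, and black (0) over the preserved original region.
    """
    w, h = img.size
    canvas = Image.new("RGB", (w + left + right, h + top + bottom), fill)
    canvas.paste(img, (left, top))

    mask = Image.new("L", canvas.size, 255)
    mask.paste(0, (left, top, left + w, top + h))  # keep original pixels

    orig_bbox = [left, top, left + w, top + h]  # [x_min, y_min, x_max, y_max]
    return canvas, mask, orig_bbox

# Example: expand a 600x410 image by asymmetric margins.
img = Image.new("RGB", (600, 410), (10, 120, 200))
canvas, mask, bbox = expand_canvas(img, left=281, top=77, right=300, bottom=200)
print(canvas.size, bbox)  # (1181, 687) [281, 77, 881, 487]
```

The resulting `canvas` and `mask` are what an inpainting pipeline such as Stable Diffusion v2 Inpaint consumes, and `orig_bbox` is stored in the annotation.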

orig_bbox uses the following format:

[x_min, y_min, x_max, y_max]

This bbox represents the position of the original image region in the outpainted canvas, rather than an object bounding box in object detection.
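Because orig_bbox follows the (x_min, y_min, x_max, y_max) convention, recovering the original frame is a single crop; PIL's `Image.crop` uses the same convention. The bbox and image size below are illustrative, taken from the sample shown later in this card.

```python
from PIL import Image

orig_bbox = [281, 77, 881, 487]

outpainted = Image.new("RGB", (1181, 687))  # stand-in for a dataset image
original = outpainted.crop(tuple(orig_bbox))
print(original.size)  # (600, 410)
```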

Data Sources

The source images of this dataset come from or refer to the following public datasets/repositories:

  1. PICD: Photographic Image Composition Dataset
    https://github.com/CV-xueba/PICD_ImageComposition

  2. LAION Aesthetics v2 4.75
    https://huggingface.co/datasets/laion/aesthetics_v2_4.75

  3. Landscape-Dataset
    https://github.com/koishi70/Landscape-Dataset/tree/master

Dataset Structure

outpainted-for-image-cropping/
├── README.md
├── metadata.jsonl
├── stats.json
└── images/
    ├── img_000000.png
    ├── img_000001.png
    └── ...

Each line in metadata.jsonl corresponds to one sample, for example:

{
  "file_name": "images/img_000000.png",
  "orig_bbox": [281, 77, 881, 487],
  "composition_tags": ["HORI2"]
}

Field Description

  • file_name: the relative path of the outpainted image.
  • orig_bbox: the bounding box of the original image region in the outpainted canvas, in the format [x_min, y_min, x_max, y_max].
  • composition_tags: a list of composition tags parsed from the original dataset. If there is no reliable composition tag, it is an empty list [].
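A minimal sanity check for one metadata.jsonl record follows from the field descriptions above: the bbox must lie inside the outpainted canvas and composition_tags must be a list. The function name is hypothetical, not part of the dataset's tooling.

```python
def validate_sample(sample, image_size):
    """Basic sanity checks for one metadata.jsonl record (illustrative)."""
    x0, y0, x1, y1 = sample["orig_bbox"]
    w, h = image_size
    assert 0 <= x0 < x1 <= w and 0 <= y0 < y1 <= h, "bbox outside canvas"
    assert isinstance(sample["composition_tags"], list)

sample = {"file_name": "images/img_000000.png",
          "orig_bbox": [281, 77, 881, 487],
          "composition_tags": ["HORI2"]}
validate_sample(sample, image_size=(1181, 687))  # passes: sample is valid
```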

Dataset Statistics

High-frequency composition tags:

Tag                 Count
HORI2               1,956
HORI3               1,694
DIFFUSE             1,600
DENSE               1,436
DIA                 1,305
LINE_VERTI3         1,156
PATTERN             1,000
LINE_VERTI_MANY       983
POINT_MULTI_HORI       64
LINE_VERTI2            55
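Counts like those in the table above can be reproduced directly from metadata.jsonl (the file shown in the Dataset Structure section) with a small script:

```python
import json
from collections import Counter

def tag_counts(jsonl_path):
    """Count composition tags across all samples in a metadata.jsonl file."""
    counts = Counter()
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            counts.update(sample.get("composition_tags", []))
    return counts
```

For example, `tag_counts("metadata.jsonl").most_common(10)` yields the ten most frequent tags with their counts.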

Usage

Load from the Hugging Face Hub:

from datasets import load_dataset

dataset = load_dataset("zzsyppt/outpainted-for-image-cropping")
print(dataset)
print(dataset["train"][0])

Check locally before uploading:

from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="./hf_dataset")
print(dataset)
print(dataset["train"][0])

Expected fields include:

image
orig_bbox
composition_tags
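To visually inspect a sample, the bbox can be drawn onto the outpainted image; a PIL-based sketch using the field conventions listed above (the function name and colors are illustrative):

```python
from PIL import Image, ImageDraw

def draw_orig_bbox(image, orig_bbox, color=(255, 0, 0), width=3):
    """Return a copy of `image` with the original-region bbox outlined."""
    out = image.copy()
    ImageDraw.Draw(out).rectangle(tuple(orig_bbox), outline=color, width=width)
    return out

img = Image.new("RGB", (1181, 687), (0, 0, 0))  # stand-in for a dataset image
vis = draw_orig_bbox(img, [281, 77, 881, 487])
```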

Citation

This dataset is released for research use only. If you use it, please cite the corresponding upstream datasets based on the actual source of the samples used.

PICD

@inproceedings{zhao2025can,
  title={Can Machines Understand Composition? Dataset and Benchmark for Photographic Image Composition Embedding and Understanding},
  author={Zhao, Zhaoran and Lu, Peng and Zhang, Anran and Li, Peipei and Li, Xia and Liu, Xuannan and Hu, Yang and Chen, Shiyi and Wang, Liwei and Guo, Wenhao},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={14411--14421},
  year={2025}
}

LAION-Aesthetics

Please refer to the official LAION page and the corresponding Hugging Face dataset page (https://huggingface.co/datasets/laion/aesthetics_v2_4.75) to cite the related work of LAION-Aesthetics / LAION-5B.

Landscape-Dataset

Please refer to the original repository: https://github.com/koishi70/Landscape-Dataset/tree/master

Acknowledgements

The generation of this dataset used Stable Diffusion v2 Inpaint and referenced or used public image data sources. We thank the creators and maintainers of the upstream datasets, repositories, and models.