---
pretty_name: Outpainted for Image Cropping
license: other
license_name: research-only
task_categories:
- image-to-image
- object-detection
tags:
- image
- computer-vision
- image-cropping
- bounding-box
- outpainting
- inpainting
- stable-diffusion
- composition
- imagefolder
size_categories:
- 10K<n<100K
---


# Outpainted for Image Cropping

<p align="center">
  <a href="https://huggingface.co/datasets/zzsyppt/outpainted-for-image-cropping/blob/main/README.md">English</a> | <a href="https://huggingface.co/datasets/zzsyppt/outpainted-for-image-cropping/blob/main/README_zh.md">中文</a>
</p>


## Dataset Overview

This dataset contains a collection of images generated by **Stable Diffusion v2 Inpaint** through outpainting, along with bounding box annotations indicating the “original image region” within each outpainted image. The dataset is mainly intended for research tasks such as image cropping, original frame recovery, composition-aware cropping, and outpainting-aware visual understanding.

Each sample contains:

- An outpainted image;
- `orig_bbox`: the location of the original image in the expanded canvas;
- `composition_tags`: a list of composition tags; the list may be empty for some samples.

## Data Generation Pipeline

![pipeline_en](./assets/pipeline_en.png)

The data generation pipeline is as follows:

1. Collect professional photographs or high-aesthetic-score images.
2. Obtain or generate image descriptions, for example by using BLIP to generate captions.
3. Set the expansion margins.
4. Use **Stable Diffusion v2 Inpaint** to complete the expanded regions.
5. Use positive prompts to constrain the generated content.
6. Use negative prompts to reduce undesired content, such as `frame`, `border`, `text`, `watermark`, etc.
7. Perform artifact detection and consistency detection on the generated results.
8. Conduct manual inspection.
9. Keep the samples that pass quality control, and record the bbox of the original image region to form training pairs. 
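Step 3 (setting the expansion margins) fixes both the expanded canvas size and the resulting `orig_bbox`. A minimal sketch of that geometry in plain Python (the function name and margin dictionary are illustrative, not part of the dataset tooling):

```python
def expand_canvas(orig_w, orig_h, margins):
    """Given the original image size and per-side expansion margins
    (in pixels), return the expanded canvas size and the bbox of the
    original region inside that canvas as [x_min, y_min, x_max, y_max].
    """
    new_w = orig_w + margins["left"] + margins["right"]
    new_h = orig_h + margins["top"] + margins["bottom"]
    orig_bbox = [
        margins["left"],           # x_min
        margins["top"],            # y_min
        margins["left"] + orig_w,  # x_max
        margins["top"] + orig_h,   # y_max
    ]
    return (new_w, new_h), orig_bbox


# A 600x410 photo expanded unevenly on each side:
size, bbox = expand_canvas(
    600, 410, {"left": 281, "top": 77, "right": 143, "bottom": 25}
)
# size == (1024, 512), bbox == [281, 77, 881, 487]
```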


`orig_bbox` uses the following format:

```text
[x_min, y_min, x_max, y_max]
```

This bbox represents the position of the original image region in the outpainted canvas; it is not an object bounding box in the object-detection sense.
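Conversely, the stored bbox lets you recover how far the canvas was expanded on each side. A small helper sketch (the function name is illustrative; `canvas_w`/`canvas_h` are the outpainted image dimensions):

```python
def bbox_to_margins(orig_bbox, canvas_w, canvas_h):
    """Convert an [x_min, y_min, x_max, y_max] original-region bbox
    into the per-side outpainting margins, in pixels."""
    x_min, y_min, x_max, y_max = orig_bbox
    return {
        "left": x_min,
        "top": y_min,
        "right": canvas_w - x_max,
        "bottom": canvas_h - y_max,
    }


# Assuming a 1024x512 outpainted canvas:
margins = bbox_to_margins([281, 77, 881, 487], 1024, 512)
# margins == {"left": 281, "top": 77, "right": 143, "bottom": 25}
```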

## Data Sources

The source images of this dataset come from or refer to the following public datasets/repositories:

1. **PICD: Photographic Image Composition Dataset**  
   https://github.com/CV-xueba/PICD_ImageComposition

2. **LAION Aesthetics v2 4.75**  
   https://huggingface.co/datasets/laion/aesthetics_v2_4.75

3. **Landscape-Dataset**  
   https://github.com/koishi70/Landscape-Dataset/tree/master

## Dataset Structure



```text
outpainted-for-image-cropping/
├── README.md
├── metadata.jsonl
├── stats.json
└── images/
    ├── img_000000.png
    ├── img_000001.png
    └── ...
```


Each line in `metadata.jsonl` corresponds to one sample, for example:

```json
{
  "file_name": "images/img_000000.png",
  "orig_bbox": [281, 77, 881, 487],
  "composition_tags": ["HORI2"]
}
```

### Field Description

- `file_name`: the relative path of the outpainted image.
- `orig_bbox`: the bounding box of the original image region in the outpainted canvas, in the format `[x_min, y_min, x_max, y_max]`.
- `composition_tags`: a list of composition tags parsed from the original dataset. If there is no reliable composition tag, it is an empty list `[]`.
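Outside of the `datasets` loader, `metadata.jsonl` can also be read directly with the standard library. A minimal sketch (the function name is illustrative):

```python
import json


def iter_metadata(path):
    """Yield one sample dict per non-empty line of metadata.jsonl."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


# for sample in iter_metadata("metadata.jsonl"):
#     print(sample["file_name"], sample["orig_bbox"], sample["composition_tags"])
```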

## Dataset Statistics
High-frequency composition tags:

| Tag | Count |
|---|---:|
| HORI2 | 1,956 |
| HORI3 | 1,694 |
| DIFFUSE | 1,600 |
| DENSE | 1,436 |
| DIA | 1,305 |
| LINE_VERTI3 | 1,156 |
| PATTERN | 1,000 |
| LINE_VERTI_MANY | 983 |
| POINT_MULTI_HORI | 64 |
| LINE_VERTI2 | 55 |

## Usage

Load from the Hugging Face Hub:

```python
from datasets import load_dataset

dataset = load_dataset("zzsyppt/outpainted-for-image-cropping")
print(dataset)
print(dataset["train"][0])
```

Check locally before uploading:

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="./hf_dataset")
print(dataset)
print(dataset["train"][0])
```

Expected fields include:

```text
image
orig_bbox
composition_tags
```
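Since `orig_bbox` is `[x_min, y_min, x_max, y_max]`, it maps directly onto Pillow's `Image.crop`, whose box argument uses the same (left, upper, right, lower) order. A sketch for recovering the original region (the blank image below stands in for a loaded sample):

```python
from PIL import Image

# Stand-in for a loaded outpainted sample; a real sample would come
# from dataset["train"][i]["image"].
canvas = Image.new("RGB", (1024, 512))
orig_bbox = [281, 77, 881, 487]

# PIL's crop box is (left, upper, right, lower) == (x_min, y_min, x_max, y_max).
original_region = canvas.crop(tuple(orig_bbox))
# original_region.size == (600, 410)
```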

## Citation

This dataset is provided for research use only. If you use this dataset, please cite the corresponding upstream datasets based on the actual source of the samples used.

### PICD

```bibtex
@inproceedings{zhao2025can,
  title={Can Machines Understand Composition? Dataset and Benchmark for Photographic Image Composition Embedding and Understanding},
  author={Zhao, Zhaoran and Lu, Peng and Zhang, Anran and Li, Peipei and Li, Xia and Liu, Xuannan and Hu, Yang and Chen, Shiyi and Wang, Liwei and Guo, Wenhao},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={14411--14421},
  year={2025}
}
```

### LAION-Aesthetics

Please refer to the official LAION page and the corresponding Hugging Face dataset page to cite the related work of LAION-Aesthetics / LAION-5B:

- https://laion.ai/blog/laion-aesthetics/
- https://huggingface.co/datasets/laion/aesthetics_v2_4.75

### Landscape-Dataset

Please refer to the original repository:

- https://github.com/koishi70/Landscape-Dataset/tree/master

## Acknowledgements

The generation of this dataset used Stable Diffusion v2 Inpaint and referenced or used public image data sources. We thank the creators and maintainers of the upstream datasets, repositories, and models.