---
license: cc-by-4.0
---

# BHI SISR Dataset

## Content

- HR Dataset
  - Used Datasets
  - Tiling
  - BHI Filtering
  - Files
  - Upload
- Corresponding LR Sets
- Trained models

## HR Dataset

The BHI SISR Dataset is meant for training single image super-resolution models. It is the result of tests on my BHI filtering method, which I wrote [a huggingface community blogpost about](https://huggingface.co/blog/Phips/bhi-filtering). Extremely summarized: removing (by filtering) only the worst-quality tiles from a training set has a far bigger positive effect on training metrics than keeping only the best-quality tiles.

It consists of 390'035 images, all 512x512px in dimension and in the webp format.

- Big arch options in general can profit from the amount of learning content in this dataset (big transformers like [DRCT-L](https://github.com/ming053l/DRCT), [HMA](https://github.com/korouuuuu/HMA), [HAT-L](https://github.com/XPixelGroup/HAT), [HATFIR](https://github.com/Zdafeng/SwinFIR), [ATD](https://github.com/LabShuHangGU/Adaptive-Token-Dictionary), [CFAT](https://github.com/rayabhisek123/CFAT), [RGT](https://github.com/zhengchen1999/RGT), [DAT2](https://github.com/zhengchen1999/dat), and probably also diffusion-based upscalers like [osediff](https://github.com/cswry/osediff), [s3diff](https://github.com/arctichare105/s3diff), [SRDiff](https://github.com/LeiaLi/SRDiff), [resshift](https://github.com/zsyoaoa/resshift), [sinsr](https://github.com/wyf0912/sinsr), [cdformer](https://github.com/i2-multimedia-lab/cdformer)). Since it takes a while to reach a new epoch, more training iterations are advised for the big arch options so they can profit from the full content. The filtering method used here made sure that metrics should not worsen during training (for example through blockiness filtering).
- This dataset could still be distilled further to reach higher quality, for example if another promising filtering method is applied to it in the future.

### Used Datasets

This BHI SISR Dataset consists of the following datasets:

[Digital_Art_v2](https://huggingface.co/datasets/umzi/digital_art_v2)

### Tiling

These datasets have then been tiled to 512x512px for improved I/O speed during training. Normalized image dimensions are also nice because processing then takes consistent resources per image.

And in some cases this led to more images, because the original images were high resolution and therefore gave multiple 512x512 tiles per single image.
For example HQ50K -> 213'396 tiles.
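
As an illustration, this kind of non-overlapping tiling can be done with a few lines of Pillow. This is only a minimal sketch, not the exact script used for this dataset; the lossless webp setting and the paths are my assumptions:

```python
from pathlib import Path
from PIL import Image

TILE = 512  # tile side length in px, matching this dataset

def tile_image(src: Path, out_dir: Path, name: str) -> int:
    """Cut non-overlapping 512x512 tiles from one image, skipping partial edge tiles."""
    img = Image.open(src).convert("RGB")
    w, h = img.size
    count = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            # lossless webp is an assumption, not necessarily what was used here
            tile.save(out_dir / f"{name}_{count}.webp", lossless=True)
            count += 1
    return count
```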

### BHI Filtering

I then filtered these sets with the BHI filtering method using the following thresholds:

My main point here is that this dataset, even though it still consists of around 390k tiles, is already a strongly reduced version of these original datasets combined.
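
To make the idea concrete, here is a hedged sketch of what threshold-based filtering boils down to, assuming per-tile blockiness, hyperiqa and ic9600 scores have already been computed (see the blogpost for how the scores are obtained). The threshold values below are placeholders, not the actual ones listed above:

```python
# Placeholder thresholds for illustration; the real values are listed above.
PLACEHOLDER_THRESHOLDS = {"blockiness": 30.0, "hyperiqa": 0.3, "ic9600": 0.3}

def keep_tile(scores: dict) -> bool:
    """Keep a tile unless it is among the worst: blockiness must stay below
    its threshold, the quality/complexity scores must stay above theirs."""
    return (
        scores["blockiness"] <= PLACEHOLDER_THRESHOLDS["blockiness"]
        and scores["hyperiqa"] >= PLACEHOLDER_THRESHOLDS["hyperiqa"]
        and scores["ic9600"] >= PLACEHOLDER_THRESHOLDS["ic9600"]
    )

# Example: a tile with bad blockiness gets dropped despite decent other scores.
print(keep_tile({"blockiness": 75.2, "hyperiqa": 0.55, "ic9600": 0.41}))  # False
```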

### Files

Files have been named '{dataset_name}_{index}.webp' so that if one of the used datasets ever turned out to be problematic concerning public access, it could still be removed from this dataset in the future.
Some tiles have been filtered out in a later step, so don't worry if some index numbers are missing; all files are listed in the [file list](https://huggingface.co/datasets/Phips/BHI/resolve/main/files.txt?download=true).
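
If you want to sanity-check a local copy against that list, something like this works (a sketch assuming the tiles sit flat in one folder and files.txt has one file name per line):

```python
from pathlib import Path
from urllib.request import urlopen

FILES_TXT = "https://huggingface.co/datasets/Phips/BHI/resolve/main/files.txt?download=true"
local_dir = Path("BHI")  # assumed folder containing the extracted tiles

# Compare the official file list against what is actually on disk.
listed = {line.strip() for line in urlopen(FILES_TXT).read().decode().splitlines() if line.strip()}
present = {p.name for p in local_dir.glob("*.webp")}
print(f"{len(listed)} listed, {len(present)} present, {len(listed - present)} missing")
```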

### Upload

I uploaded the dataset as multi-part zip archive files with a max of 25GB per file, resulting in 6 archive files.
This should work with the LFS file size limit, and I chose zip because it's such a common format.

## Corresponding LR Sets

In most cases, only the HR part, meaning the part published here, is needed. LR sets, like a bicubic-only downsampled counterpart for training 2x or 4x models, can very simply be generated by the user.
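
For instance, a 4x bicubic LR counterpart could be generated along these lines (a minimal sketch assuming Pillow; the scale factor, folder names and lossless webp setting are illustrative):

```python
from pathlib import Path
from PIL import Image

SCALE = 4  # model scale; 512x512 HR tiles become 128x128 LR tiles
hr_dir = Path("BHI")        # assumed folder with the 512x512 HR tiles
lr_dir = Path("BHI_LR_x4")  # output folder for the bicubic LR tiles
lr_dir.mkdir(exist_ok=True)

for hr_path in hr_dir.glob("*.webp"):
    hr = Image.open(hr_path)
    lr = hr.resize((hr.width // SCALE, hr.height // SCALE), Image.Resampling.BICUBIC)
    # lossless webp keeps the LR set free of compression on top of the resample
    lr.save(lr_dir / hr_path.name, lossless=True)
```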

However, I thought I would provide some prebuilt LR sets, which are the ones I used to train models myself. The resulting models can of course be downloaded and tried out.
See the links for degradation details and downloads (separate dataset pages):

[BHI_LR_multi](https://huggingface.co/datasets/Phips/BHI_LR_multi) was made using multiple different downsampling/scaling algorithms.
[BHI_LR_multiblur](https://huggingface.co/datasets/Phips/BHI_LR_multiblur) as above, but with added blur for deblurring/sharper results, plus both jpg and webp compression for compression handling.
[BHI_LR_real](https://huggingface.co/datasets/Phips/BHI_LR_real) is my attempt at a realistically degraded dataset, so a trained upscaling model can handle images downloaded from the web.
|