---
configs:
- config_name: sample
  data_files:
  - split: sample
    path: sample.zip
size_categories:
- 1K<n<10K
---

This is an EXPERIMENTAL dataset for the purpose of training super-resolution models. Also check out my [ModernAnimation_v3](https://huggingface.co/datasets/Zarxrax/ModernAnimation1080_v3) dataset for something a bit more battle-tested.

- Contains 2500 full-size frames taken from Blu-rays of classic animation dating from the 1940s through the 1990s.
- All sources contain film grain. The amount of grain causes compression artifacts in some of the sources.
- I have also provided a denoised version of the dataset, created with TemporalDegrainV2 in AviSynth. The efficacy of this varies from source to source.
- You may wish to do further processing, such as generating tiles, prior to training. You are also expected to generate any LR images yourself. I recommend [WTP Dataset Destroyer](https://github.com/umzi2/wtp_dataset_destroyer) for this purpose.
- 25 additional validation images have also been included, one from each source.
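As one way to approach the tiling step mentioned above, here is a minimal Pillow sketch that crops a full frame into fixed-size training tiles. The 256px tile size and non-overlapping stride are illustrative assumptions, not values prescribed by this dataset; adjust them for your training pipeline.

```python
from PIL import Image

def tile_image(img, tile_size=256, stride=256):
    """Crop an image into tiles, dropping partial tiles at the edges."""
    tiles = []
    w, h = img.size
    for top in range(0, h - tile_size + 1, stride):
        for left in range(0, w - tile_size + 1, stride):
            tiles.append(img.crop((left, top, left + tile_size, top + tile_size)))
    return tiles

# A 1920x1080 frame yields a 7x4 grid of non-overlapping 256px tiles.
frame = Image.new("RGB", (1920, 1080))
tiles = tile_image(frame)
print(len(tiles))  # 28
```

Setting `stride` smaller than `tile_size` produces overlapping tiles, which increases the number of training samples at the cost of some redundancy.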