Phips committed
Commit 9b4cf21 · verified · 1 Parent(s): eca4ef8

Update README.md

Files changed (1): README.md (+36 −3)
---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to GitHub Release](https://github.com/Phhofm/models/releases/tag/2xHFA2kAVCSRFormer_light)

# 2xHFA2kAVCSRFormer_light

Name: 2xHFA2kAVCSRFormer_light
Author: Philip Hofmann
Release Date: 11.07.2023
License: CC BY 4.0
Network: SRFormer_light
Scale: 2
Purpose: 2x anime upscaling model that handles AVC (h264) compression
Iterations: 140000
batch_size: 2-4
HR_size: 128-192
Dataset: HFA2k_h264
Number of train images: 2568
OTF Training: No
Pretrained_Model_G: SRFormerLight_SRx2_DIV2K.pth

Description: A 2x SRFormer_light anime upscaling model that handles AVC (h264) compression. For training, h264 CRF 20-28 degradation, together with bicubic, bilinear, box, and lanczos downsampling, was applied to musl's HFA2k dataset using Kim's Dataset Destroyer.

If you want to run this model with chaiNNer (or another application), use the ONNX files with an ONNX upscale node. All ONNX conversions can be found in the [onnx folder](https://github.com/Phhofm/models/tree/main/2xHFA2kAVCSRFormer_light/onnx) in my repo.
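Outside chaiNNer, the ONNX export can also be run directly with ONNX Runtime. The sketch below is a minimal example, assuming the export takes a float32 NCHW RGB tensor scaled to [0, 1] and returns a 2x-sized tensor in the same layout; the model path shown is a placeholder, so point it at whichever file from the onnx folder you downloaded.

```python
import numpy as np

def preprocess(img_u8: np.ndarray) -> np.ndarray:
    """HWC uint8 RGB image -> NCHW float32 batch in [0, 1]."""
    x = img_u8.astype(np.float32) / 255.0
    return x.transpose(2, 0, 1)[None]  # (H, W, 3) -> (1, 3, H, W)

def postprocess(out: np.ndarray) -> np.ndarray:
    """NCHW float32 model output -> HWC uint8 image, clipped to [0, 1]."""
    y = np.clip(out[0].transpose(1, 2, 0), 0.0, 1.0)
    return (y * 255.0).round().astype(np.uint8)

def upscale(model_path: str, img_u8: np.ndarray) -> np.ndarray:
    """Run a 2x super-resolution ONNX model on an RGB image array."""
    import onnxruntime as ort  # pip install onnxruntime
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    out = session.run(None, {input_name: preprocess(img_u8)})[0]
    return postprocess(out)

# Example (paths are placeholders):
# from PIL import Image
# img = np.asarray(Image.open("frame.png").convert("RGB"))
# sr = upscale("2xHFA2kAVCSRFormer_light.onnx", img)  # twice the height/width
# Image.fromarray(sr).save("frame_2x.png")
```

Since the model was trained on h264-degraded inputs, feed it the compressed frames directly rather than pre-denoising them.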
Example 1: https://imgsli.com/MTkxMTQz
Example 2: https://imgsli.com/MTkxMTQ0

![Example1](https://github.com/Phhofm/models/assets/14755670/d790378e-3a06-4b3c-8d74-39378e82c19a)
![Example2](https://github.com/Phhofm/models/assets/14755670/105b16c4-5041-48d5-a981-b2317bf79d4a)