bconsolvo committed · Commit 85c8ade · 1 Parent(s): 79d7e98

updates readme

Files changed (1)
  1. README_ben.md +2 -2
README_ben.md CHANGED
@@ -25,7 +25,7 @@ This version of the model is the SESR-S (Small) version; it has been converted f
 | Model date | January 9, 2026 |
 | Model version | 1 |
 | Model type | Super-Resolution (Image-to-Image) |
-| Information about training algorithms, parameters, fairness constraints or other applied approaches, and features | The \\(\times2\\) SESR was trained for "300 epochs using ADAM optimizer with a constant learning rate of \\(5 \times 10^{-4}\\) and a batch size of 32 on DIV2K training set." And the \\(\times4\\) SESR starts with the pretrained \\(\times2\\) SESR and replaces "the final layer of \\(5 \times 5 \times f \times 4\\) with a \\(5 \times 5 \times f \times 16\\) and then perform the depth-to-space operation twice" ([Bhardwaj et al., 2022](https://arxiv.org/abs/2103.09404)). |
+| Information about training algorithms, parameters, fairness constraints or other applied approaches, and features | The \\(\times2\\) SESR was trained for "300 epochs using ADAM optimizer with a constant learning rate of \\(5 \times 10^{-4}\\) and a batch size of 32 on DIV2K training set." And the \\(\times4\\) SESR model starts with the pretrained \\(\times2\\) SESR model and replaces "the final layer of \\(5 \times 5 \times f \times 4\\) with a \\(5 \times 5 \times f \times 16\\) and then perform[s] the depth-to-space operation twice" ([Bhardwaj et al., 2022](https://arxiv.org/abs/2103.09404)). For more training details, refer to the paper.|
 | Paper or other resource for more information| [Bhardwaj, K., Milosavljevic, M., O'Neil, L., Gope, D., Matas, R., Chalfin, A., ... & Loh, D. (2022). Collapsible linear blocks for super-efficient super resolution. Proceedings of machine learning and systems, 4, 529-547](https://arxiv.org/abs/2103.09404) |
 | License | [Apache 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) |
 | Where to send questions or comments about the model | [Community Tab](https://huggingface.co/amd/sesr/discussions) and [AMD Developer Community Discord](https://discord.gg/amd-dev)|
@@ -70,7 +70,7 @@ $Env:RYZEN_AI_INSTALLATION_PATH = 'C:/Program Files/RyzenAI/1.7.0/'
 3. Clone the Hugging Face model repository:
 
 ```powershell
-git clone https://hf.co/amd/real-esrgan_npu
+git clone https://huggingface.co/amd/sesr
 ```
 
 4. Install the prerequisites:
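
The updated table row quotes the paper's description of the ×4 head: the final \\(5 \times 5 \times f \times 4\\) layer is replaced by a \\(5 \times 5 \times f \times 16\\) layer, and two depth-to-space steps convert the extra channels into a ×4 spatial upscale. As a minimal PyTorch sketch of that idea (assuming a single-channel luma output; `f` and all layer names here are illustrative placeholders, not values taken from the released model):

```python
import torch
import torch.nn as nn

f = 16  # feature width; a placeholder value, not the released model's

head_x4 = nn.Sequential(
    nn.Conv2d(f, 16, kernel_size=5, padding=2),  # the 5 x 5 x f x 16 layer
    nn.PixelShuffle(2),  # depth-to-space: 16 -> 4 channels, 2x height/width
    nn.PixelShuffle(2),  # depth-to-space: 4 -> 1 channel, another 2x
)

features = torch.randn(1, f, 64, 64)  # dummy f-channel feature map
print(head_x4(features).shape)        # torch.Size([1, 1, 256, 256])
```

Each `PixelShuffle(2)` trades a factor of 4 in channels for a factor of 2 in height and width, so the 16 output channels collapse to a single channel at 4× the input resolution.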
 