upscaled = pipe.image_upscale("media/girl.jpg")
upscaled[0].show()
```

[HF Demo](https://huggingface.co/spaces/LoveScapeAI/sdxs-1b-upscaler)
The Asymmetric VAE features a built-in 2x image upscaler. Because VAEs are trained to reconstruct images as accurately as possible, this upscaler acts as a "blind" processor: it enlarges the image by 2x within its trained range (512–768px) without altering its essence. Unlike model-based upscalers, it strictly preserves the original style and exact details without hallucinating new ones. This makes it ideal for precise, true-to-source upscaling. Do not expect AI magic, however: it cannot fix messy generations, invent missing textures, or turn a bad image into a masterpiece. The upscaler has also been tested on video and can be used independently by downloading just the autoencoder model.
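To illustrate the mechanism (this is not the model's actual code): an asymmetric VAE upscales because its decoder reconstructs at twice the spatial factor the encoder compressed by, so a round trip through the autoencoder comes out at 2x resolution. The toy sketch below only computes the shapes involved; the spatial factors are illustrative assumptions, and the real encoder and decoder are learned networks.

```python
def upscaled_size(w, h, enc_factor=8, dec_factor=16):
    """Output size of an asymmetric-VAE round trip (illustrative only).

    enc_factor: spatial compression of the encoder (8x is typical
    for latent-diffusion VAEs; an assumption here, not a spec).
    dec_factor: pixels reconstructed per latent cell by the decoder.
    Net scale = dec_factor / enc_factor = 2x.
    """
    lw, lh = w // enc_factor, h // enc_factor   # latent grid size
    return lw * dec_factor, lh * dec_factor     # decoded image size

print(upscaled_size(512, 512))  # (1024, 1024): a 512px input decodes at 2x
```

Because the 2x factor is baked into the decoder's architecture rather than into a separate super-resolution model, there is no opportunity for the network to invent content — hence the "blind" behavior described above.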
Development Note: We have not provided direct comparisons with other popular upscalers (such as ESRGAN or SUPIR) because this is not the final version. The current model was trained in just 2 days on a single GPU; reaching this architecture, however, took approximately 2 months of intensive research and significant personal funding. I am currently unemployed and cannot afford the further training or compute needed to bring this to its final state, so I am releasing it as-is. I apologize for the lack of comparative benchmarks. The training code is fully open-source and available in our VAE collection: https://huggingface.co/collections/AiArtLab/vae. Based on the Flux.2 [autoencoder](https://github.com/black-forest-labs/flux2/blob/main/src/flux2/autoencoder.py) architecture and weights.