Improve model card: Add metadata, external links, and usage example

#1 by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +15 -5
README.md CHANGED
@@ -1,18 +1,29 @@
+---
+pipeline_tag: image-to-image
+library_name: diffusers
+license: mit
+---
 
 ## EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
-Arxiv: https://arxiv.org/abs/2502.09509 <br>
+Arxiv: [https://arxiv.org/abs/2502.09509](https://arxiv.org/abs/2502.09509)
+Project page: [https://eq-vae.github.io/](https://eq-vae.github.io/)
+Code: [https://github.com/zelaki/eqvae](https://github.com/zelaki/eqvae)
 
 **EQ-VAE** regularizes the latent space of pretrained autoencoders by enforcing equivariance under scaling and rotation transformations.
 
 ---
 #### Model Description
-This model is a regularized version of [SD-VAE](https://github.com/CompVis/latent-diffusion). We finetune it with EQ-VAE regularization for 44 epochs on Imagenet with EMA weights.
+This model (`eq-vae-ema`) is a regularized version of [SD-VAE](https://github.com/CompVis/latent-diffusion). We finetune it with EQ-VAE regularization for 44 epochs on Imagenet with EMA weights.
 
 
 ## Model Usage
 These weights are intended to be used with the [EQ-VAE codebase](https://github.com/zelaki/eqvae) or the [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion).
-If you are looking for the model to use with the 🧨 diffusers library, [come here](https://huggingface.co/zelaki/eq-vae-ema).
 
+You can also use this model with the 🧨 diffusers library:
+```python
+from diffusers import AutoencoderKL
+eqvae = AutoencoderKL.from_pretrained("zelaki/eq-vae-ema")
+```
 
 #### Metrics
 Reconstruction performance of eq-vae-ema on Imagenet Validation Set.
@@ -23,5 +34,4 @@ Reconstruction performance of eq-vae-ema on Imagenet Validation Set.
 | **PSNR** | 26.158 |
 | **LPIPS** | 0.133 |
 | **SSIM** | 0.725 |
----
-
+---