Improve model card: Add paper, links, license, pipeline tag & usage example
#1
by nielsr (HF Staff) - opened

README.md CHANGED
---
license: mit
pipeline_tag: image-feature-extraction
---

# A Mutual Information Perspective on Multiple Latent Variable Generative Models for Positive View Generation

This repository contains the official implementation for the paper [A Mutual Information Perspective on Multiple Latent Variable Generative Models for Positive View Generation](https://huggingface.co/papers/2501.13718).

## Abstract

In image generation, Multiple Latent Variable Generative Models (MLVGMs) employ multiple latent variables to gradually shape the final images, from global characteristics to finer and local details (e.g., StyleGAN, NVAE), emerging as powerful tools for diverse applications. Yet their generative dynamics remain only empirically observed, without a systematic understanding of each latent variable's impact. In this work, we propose a novel framework that quantifies the contribution of each latent variable using Mutual Information (MI) as a metric. Our analysis reveals that current MLVGMs often underutilize some latent variables, and provides actionable insights for their use in downstream applications. With this foundation, we introduce a method for generating synthetic data for Self-Supervised Contrastive Representation Learning (SSCRL). By leveraging the hierarchical and disentangled variables of MLVGMs, our approach produces diverse and semantically meaningful views without the need for real image data. Additionally, we introduce a Continuous Sampling (CS) strategy, where the generator dynamically creates new samples during SSCRL training, greatly increasing data variability. Our comprehensive experiments demonstrate the effectiveness of these contributions, showing that MLVGMs' generated views compete on par with or even surpass views generated from real data. This work establishes a principled approach to understanding and exploiting MLVGMs, advancing both generative modeling and self-supervised learning.
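
To make the MI-based analysis concrete, here is a minimal, self-contained sketch — a plain histogram MI estimator on synthetic data, not the paper's estimator — showing how mutual information can rank the contribution of two toy latent variables to a generated feature:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate (in nats) between two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 20_000
z1 = rng.normal(size=n)  # "global" latent: strongly drives the output
z2 = rng.normal(size=n)  # "local" latent: weakly drives the output
out = 2.0 * z1 + 0.1 * z2 + 0.1 * rng.normal(size=n)  # toy generator feature

mi_z1 = mutual_information(z1, out)
mi_z2 = mutual_information(z2, out)
# z1 carries far more information about the output than z2
```

In the paper's setting the same comparison is done per latent variable of a real MLVGM; an underutilized variable shows up as one with low MI to the generated output.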

## Links

- **Paper:** [https://huggingface.co/papers/2501.13718](https://huggingface.co/papers/2501.13718)
- **Project Page (TMLR):** [https://openreview.net/forum?id=uaj8ZL2PtK](https://openreview.net/forum?id=uaj8ZL2PtK)
- **Code:** [https://github.com/SerezD/mi_ml_gen](https://github.com/SerezD/mi_ml_gen)

## Sample Usage

If you only want to use the repository for view generation, simply run the script:

```bash
python mi_ml_gen/src/scripts/view_generation.py --configuration ./conf.yaml --save_folder ./tmp/
```

Examples of valid configurations (adapt them to your needs):

- `mi_ml_gen/configurations/view_generation/bigbigan.yaml`
- `mi_ml_gen/configurations/view_generation/stylegan.yaml`

For example, this can generate views like:

*(example generated view images)*

## Citation

```bibtex
@article{serez2025a,
  title={A Mutual Information Perspective on Multiple Latent Variable Generative Models for Positive View Generation},
  author={Dario Serez and Marco Cristani and Alessio Del Bue and Vittorio Murino and Pietro Morerio},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=uaj8ZL2PtK},
  note={}
}
```