# Progressive Compression with Universally Quantized Diffusion Models

Official implementation of our ICLR 2025 paper [Progressive Compression with Universally Quantized Diffusion Models](https://www.justuswill.com/uqdm/) by Yibo Yang, Justus Will, and Stephan Mandt.

## TL;DR

Our new diffusion-model variant, UQDM, enables practical progressive compression with an unconditional diffusion model: by using universal quantization, it avoids the computational intractability of Gaussian channel simulation.
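
The key mechanism, universal quantization, is simple to sketch: rounding with a dither that is shared between encoder and decoder makes the quantization error uniformly distributed and independent of the input, so the decoder effectively receives the signal through an additive uniform-noise channel. A minimal NumPy illustration (not part of this repo):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)               # values to be quantized
u = rng.uniform(-0.5, 0.5, size=x.shape)   # dither shared by encoder and decoder

y = np.round(x + u) - u                    # universal quantization
err = y - x                                # distributed Uniform(-0.5, 0.5), independent of x
```

Because the encoder only has to entropy-code the integers `round(x + u)`, this sidesteps the exact Gaussian channel simulation that makes standard diffusion-based compression intractable.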

## Setup

```bash
git clone https://github.com/mandt-lab/uqdm.git
cd uqdm
conda env create -f environment.yml
conda activate uqdm
```

To work with ImageNet64, download the following npz dataset files from the [official website](https://image-net.org/download-images.php) and place them in `./data/imagenet64`:

- Train(64x64) part1
- Train(64x64) part2
- Val(64x64)

During loading, our implementation removes the duplicate test images listed in `./data/imagenet64/removed.npy`.
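
The downloaded files are plain npz archives. As a sketch, one batch can be turned into images as follows; the `data` key and the channel-major flattened layout follow the format of the official ImageNet64 downloads, and the exact filenames on disk should be treated as an assumption:

```python
import numpy as np

def load_imagenet64_batch(path):
    """Load one ImageNet64 npz batch into (N, 64, 64, 3) uint8 images."""
    batch = np.load(path)
    data = batch['data']  # shape (N, 12288): all red pixels, then green, then blue
    return data.reshape(-1, 3, 64, 64).transpose(0, 2, 3, 1)
```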

## Usage

To use a pretrained model, place its `config.json` and `checkpoint.pt` in a shared folder and load it, for example, via

```python
from uqdm import load_checkpoint, load_data

model = load_checkpoint('checkpoints/uqdm-tiny')
train_iter, eval_iter = load_data('ImageNet64', model.config.data)
```

To train or evaluate the model, call respectively:

```python
model.trainer(train_iter, eval_iter)
model.evaluate(eval_iter)
```

To obtain the compressed representation of an image, and to reconstruct images from their compressed representations, use:

```python
image = next(iter(eval_iter))
compressed = model.compress(image)
reconstructions = model.decompress(compressed)
```
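
To sanity-check reconstruction quality, PSNR can be computed directly with a small helper (ours, not part of the repo's API), assuming images are `uint8` arrays with values in [0, 255]:

```python
import numpy as np

def psnr(original, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(original, dtype=np.float64)
                   - np.asarray(reconstruction, dtype=np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

Comparing PSNR across the progressive reconstructions shows how quality improves as more of the bitstream is decoded.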

## Citation

```bibtex
@article{yang2025universal,
  title={Progressive Compression with Universally Quantized Diffusion Models},
  author={Yibo Yang and Justus Will and Stephan Mandt},
  journal={International Conference on Learning Representations},
  year={2025}
}
```