jdeschena and nielsr (HF Staff) committed
Commit d94f1cd · Parent(s): 8b5c2d9

Add metadata and improve model card (#1)

- Add metadata and improve model card (2551b5f1331f281f1b96d33b553539e5d4b1cd16)

Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1): README.md (+31 -11)
README.md CHANGED
@@ -1,9 +1,24 @@
  ---
- {}
  ---
- ## Sampling with the MDLM / Duo Checkpoints
-
- To sample from the pre-trained MDLM & Duo models, you can either play with our [Colab notebook](https://colab.research.google.com/drive/1uFSzrfG0KXhGcohRIfWIM2Y7V9Q7cQNA), or download the raw checkpoints [here](https://huggingface.co/jdeschena/duo2-cifar10/tree/main), clone our [GitHub repo](https://github.com/s-sahoo/duo), and run the following command (see the GitHub repo for more examples):

  ```bash
  TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1 # Depending on your PyTorch version, this might be needed to load the checkpoint
@@ -21,17 +36,13 @@ python -u -m main \
  eval.checkpoint_path=<PATH-TO-THE-DUO-CHECKPOINT>
  ```

- The models are trained for 1.5M steps on CIFAR-10, and have approximately 35M parameters. The architecture is the same as in [D3PM](https://arxiv.org/abs/2107.03006).
-
- Find the text checkpoints in this [here](https://huggingface.co/s-sahoo/duo).

  ### Citation

- Please cite our work using the bibtex below:
-
- **BibTeX**:

- ```
  @inproceedings{
  deschenaux2026the,
  title={The Diffusion Duality, Chapter {II}: \${\textbackslash}Psi\$-Samplers and Efficient Curriculum},
@@ -39,5 +50,14 @@ Please cite our work using the bibtex below:
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=RSIoYWIzaP}
- }
  ```
 
  ---
+ pipeline_tag: unconditional-image-generation
  ---

+ # Duo (Image Modeling) - CIFAR-10
+
+ This repository contains pre-trained checkpoints for image modeling on CIFAR-10, as presented in the paper [The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum](https://huggingface.co/papers/2602.21185).
+
+ - **Paper:** [The Diffusion Duality, Chapter II: $\Psi$-Samplers and Efficient Curriculum](https://huggingface.co/papers/2602.21185)
+ - **Project Page:** [s-sahoo.com/duo-ch2](https://s-sahoo.com/duo-ch2)
+ - **GitHub Repository:** [s-sahoo/duo](https://github.com/s-sahoo/duo)
+
+ ## Model Description
+
+ Uniform-state discrete diffusion models excel at few-step generation and guidance due to their ability to self-correct. This checkpoint is part of the Duo series, which introduces a family of Predictor-Corrector (PC) samplers called $\Psi$-samplers. Unlike conventional samplers, these methods continue to improve quality as the number of sampling steps increases.
+
+ The CIFAR-10 models are trained for 1.5M steps and have approximately 35M parameters. The architecture is the same as in [D3PM](https://arxiv.org/abs/2107.03006).
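The self-correction idea behind predictor-corrector sampling can be illustrated with a toy loop. The sketch below is NOT the paper's $\Psi$-sampler: the "denoiser" is a hypothetical stand-in that interpolates between the current token and a fixed target sequence, so the corrector's re-noise-then-re-denoise step can visibly repair earlier sampling errors.

```python
# Toy predictor-corrector sampler for a uniform-state discrete diffusion.
# Illustrative only: the denoiser is a stand-in, not a trained network.
import numpy as np

rng = np.random.default_rng(0)
V, L, steps = 8, 16, 32          # vocab size, sequence length, sampler steps
target = rng.integers(0, V, L)   # what a "perfect" denoiser would predict

def denoiser_probs(x, t):
    # Keep the current token with weight t (still noisy); move mass toward
    # the target with weight 1 - t (the denoiser's prediction).
    p = np.zeros((L, V))
    p[np.arange(L), x] += t
    p[np.arange(L), target] += 1.0 - t
    return p

def predictor(x, t):
    # Ancestral step: sample each position from the denoiser's categorical.
    p = denoiser_probs(x, t)
    return np.array([rng.choice(V, p=pi) for pi in p])

def corrector(x, t, noise_frac=0.25):
    # Forward-noise a fraction of positions back to uniform, then re-denoise:
    # this is the self-correction that lets earlier mistakes be revisited.
    mask = rng.random(L) < noise_frac
    x = np.where(mask, rng.integers(0, V, L), x)
    return predictor(x, t)

x = rng.integers(0, V, L)            # start from pure uniform noise
for i in range(steps):
    t = 1.0 - (i + 1) / steps        # anneal the noise level from 1 to 0
    x = predictor(x, t)
    x = corrector(x, t)
print((x == target).mean())          # prints 1.0: at t=0 the toy denoiser is exact
```

Because each position always has nonzero mass on every recoverable state, adding corrector passes can only help here, which is the intuition for why such samplers keep improving with more steps.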
+ ## Sampling with the Duo Checkpoints
+
+ To sample from the pre-trained MDLM & Duo models, you can either play with our [Colab notebook](https://colab.research.google.com/drive/1uFSzrfG0KXhGcohRIfWIM2Y7V9Q7cQNA), or download the raw checkpoints from this repository, clone our [GitHub repo](https://github.com/s-sahoo/duo), and run the following command:

  ```bash
  TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1 # Depending on your PyTorch version, this might be needed to load the checkpoint

  eval.checkpoint_path=<PATH-TO-THE-DUO-CHECKPOINT>
  ```
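For context on the `TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1` line: since PyTorch 2.6, `torch.load` defaults to `weights_only=True`, which refuses to unpickle the arbitrary Python objects that full training checkpoints often contain. A minimal sketch of the in-code equivalent, using a dummy checkpoint rather than the real Duo one (only do this for checkpoints you trust):

```python
# Demonstrates the in-code equivalent of TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1.
import os
import tempfile

import torch

# A stand-in checkpoint shaped like a typical Lightning checkpoint.
ckpt = {"state_dict": {"w": torch.zeros(4)}, "global_step": 1_500_000}

path = os.path.join(tempfile.mkdtemp(), "demo.ckpt")
torch.save(ckpt, path)

# weights_only=False restores the pre-2.6 behaviour explicitly,
# just as the environment variable does globally.
loaded = torch.load(path, weights_only=False)
print(loaded["global_step"])  # 1500000
```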
+ Find the text checkpoints [here](https://huggingface.co/s-sahoo/duo).

  ### Citation

+ If you use this work, please cite the following:

+ ```bibtex
  @inproceedings{
  deschenaux2026the,
  title={The Diffusion Duality, Chapter {II}: $\Psi$-Samplers and Efficient Curriculum},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=RSIoYWIzaP}
+ }
+
+ @inproceedings{
+ sahoo2025the,
+ title={The Diffusion Duality},
+ author={Subham Sekhar Sahoo and Justin Deschenaux and Aaron Gokaslan and Guanghan Wang and Justin T Chiu and Volodymyr Kuleshov},
+ booktitle={Forty-second International Conference on Machine Learning},
+ year={2025},
+ url={https://openreview.net/forum?id=9P9Y8FOSOk}
+ }
  ```