nielsr (HF Staff) committed
Commit 4aaf375 · verified · 1 parent: 0d623bb

Improve model card: Add pipeline tag, library, project link, and sample usage

This PR enhances the model card by:
- Adding `pipeline_tag: any-to-any` to accurately reflect the model's multimodal generation and editing capabilities.
- Adding `library_name: transformers` as the model is compatible with the Hugging Face Transformers library.
- Adding an explicit link to the project page.
- Including a sample inference command in a "Sample Usage" section for easier adoption, taken directly from the GitHub README.

These additions will make the model more discoverable and easier to use on the Hugging Face Hub.
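For reference, the metadata keys this PR adds live in the model card's YAML front matter, the block between the first two `---` lines of README.md. The stdlib-only sketch below shows how such front matter can be extracted; `parse_front_matter` is a hypothetical helper for illustration, not the Hub's actual parser.

```python
# Hypothetical sketch: extracting the YAML front-matter keys this PR adds.
# Not Hub code; a minimal flat key: value parser for illustration only.
card = """---
license: mit
library_name: transformers
pipeline_tag: any-to-any
---

# MMaDA-Parallel-M
"""

def parse_front_matter(text):
    # Front matter is everything between the opening '---' line
    # and the next line starting with '---'.
    if not text.startswith("---\n"):
        return {}
    block = text[4:].split("\n---", 1)[0]
    meta = {}
    for line in block.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = parse_front_matter(card)
print(meta["pipeline_tag"])   # any-to-any
print(meta["library_name"])   # transformers
```

The Hub reads exactly these keys to populate the model page's pipeline badge and "Use in transformers" widget, which is why the PR adds them.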

Files changed (1)
  1. README.md +33 -1
README.md CHANGED
@@ -1,14 +1,46 @@
 ---
 license: mit
+library_name: transformers
+pipeline_tag: any-to-any
 ---
+
 # MMaDA-Parallel-M

 We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (MMaDA-Parallel), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.

 ## Note: This version is still in development; artifacts may appear during generation.

-[Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel)
+[Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel) | [Project Page](https://tyfeld.github.io/mmadaparellel.github.io/)
+
+<div align="center">
+<img src="https://github.com/tyfeld/MMaDA-Parallel/raw/main/assets/demos.png"/>
+</div>
+
+## Sample Usage
+
+This example demonstrates how to perform parallel generation using MMaDA-Parallel-A. Make sure you have installed the necessary dependencies as outlined in the [GitHub repository](https://github.com/tyfeld/MMaDA-Parallel).
+
+```bash
+cd MMaDA-Parallel-A
+python inference.py \
+    --checkpoint tyfeld/MMaDA-Parallel-A \
+    --vae_ckpt tyfeld/MMaDA-Parallel-A \
+    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
+    --image_path examples/image.png \
+    --height 512 \
+    --width 512 \
+    --timesteps 64 \
+    --text_steps 128 \
+    --text_gen_length 256 \
+    --text_block_length 32 \
+    --cfg_scale 0 \
+    --cfg_img 4.0 \
+    --temperature 1.0 \
+    --text_temperature 0 \
+    --seed 42 \
+    --output_dir output/results_interleave
+```

 # Citation
 ```
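The flags in the sample command map naturally onto a standard `argparse` interface. The sketch below is an assumption about that interface, mirroring only the flags and values shown in the command above; it is not the repository's actual `inference.py`.

```python
import argparse

# Hypothetical sketch of a CLI consistent with the sample command above.
# Flag names and defaults are taken from the command; everything else is assumed.
def build_parser():
    p = argparse.ArgumentParser(description="MMaDA-Parallel inference (sketch)")
    p.add_argument("--checkpoint", required=True)
    p.add_argument("--vae_ckpt", required=True)
    p.add_argument("--prompt", required=True)
    p.add_argument("--image_path")                      # input image for editing
    p.add_argument("--height", type=int, default=512)
    p.add_argument("--width", type=int, default=512)
    p.add_argument("--timesteps", type=int, default=64)         # image denoising steps
    p.add_argument("--text_steps", type=int, default=128)       # text denoising steps
    p.add_argument("--text_gen_length", type=int, default=256)
    p.add_argument("--text_block_length", type=int, default=32)
    p.add_argument("--cfg_scale", type=float, default=0.0)      # text guidance
    p.add_argument("--cfg_img", type=float, default=4.0)        # image guidance
    p.add_argument("--temperature", type=float, default=1.0)
    p.add_argument("--text_temperature", type=float, default=0.0)
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--output_dir", default="output/results_interleave")
    return p

args = build_parser().parse_args([
    "--checkpoint", "tyfeld/MMaDA-Parallel-A",
    "--vae_ckpt", "tyfeld/MMaDA-Parallel-A",
    "--prompt", "Replace the laptops with futuristic transparent tablets.",
])
```

A parser like this makes the unspecified flags fall back to the sample command's values, so shorter invocations stay reproducible.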