Improve model card: add pipeline tag, library name, usage, and additional links

#2
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md +50 -2
README.md CHANGED
@@ -1,7 +1,9 @@
 ---
-license: apache-2.0
 base_model:
 - CompVis/stable-diffusion-v1-4
+license: apache-2.0
+pipeline_tag: text-to-image
+library_name: diffusers
 ---
 
 # SPEED
@@ -13,4 +15,50 @@ Here are the released model checkpoints of our paper:
 
 **Three characteristics of our proposed method, SPEED.** **(a) Scalable:** SPEED seamlessly scales from single-concept to large-scale multi-concept erasure (e.g., 100 celebrities) without additional design. **(b) Precise:** SPEED precisely removes the target concept (e.g., *Snoopy*) while preserving the semantic integrity for non-target concepts (e.g., *Hello Kitty* and *SpongeBob*). **(c) Efficient:** SPEED can immediately erase 100 concepts within 5 seconds, achieving a ×350 speedup over the state-of-the-art (SOTA) method.
 
-More implementation details can be found in our [GitHub repository](https://github.com/Ouxiang-Li/SPEED).
+More implementation details can be found in our [GitHub repository](https://github.com/Ouxiang-Li/SPEED).
+
+## Usage
+
+Here's an example of how to use the model for image sampling, taken from the [official GitHub repository](https://github.com/Ouxiang-Li/SPEED) (for instance erasure):
+
+```bash
+# Instance Erasure
+CUDA_VISIBLE_DEVICES=0 python sample.py \
+    --erase_type 'instance' \
+    --target_concept 'Snoopy, Mickey, Spongebob' \
+    --contents 'Snoopy, Mickey, Spongebob, Pikachu, Hello Kitty' \
+    --mode 'original, edit' \
+    --edit_ckpt '{checkpoint_path}' \
+    --num_samples 10 --batch_size 10 \
+    --save_root 'logs/few-concept/instance'
+```
+
+In the command above, you can configure `--mode` to determine the sampling mode:
+
+- `original`: Generate images using the original Stable Diffusion model.
+- `edit`: Generate images with the erased checkpoint.
+
+## Model Card
+
+We provide several edited models produced by SPEED on Stable Diffusion v1.4.
+
+| Concept Erasure Task | Edited Model |
+|---|---|
+| Few-Concept Erasure | <a href='https://huggingface.co/lioooox/SPEED/tree/main/few-concept' style="margin: 0 2px; text-decoration: none;"><img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'></a> |
+| Multi-Concept Erasure | <a href='https://huggingface.co/lioooox/SPEED/tree/main/multi-concept' style="margin: 0 2px; text-decoration: none;"><img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'></a> |
+| Implicit Concept Erasure | <a href='https://huggingface.co/lioooox/SPEED/tree/main/nudity' style="margin: 0 2px; text-decoration: none;"><img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'></a> |
+
+## Citation
+
+If you find the repo useful, please consider citing:
+
+```bibtex
+@misc{li2025speedscalablepreciseefficient,
+  title={SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models},
+  author={Ouxiang Li and Yuan Wang and Xinting Hu and Houcheng Jiang and Tao Liang and Yanbin Hao and Guojun Ma and Fuli Feng},
+  year={2025},
+  eprint={2503.07392},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV},
+  url={https://arxiv.org/abs/2503.07392},
+}
+```
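
One note on the usage section being added above: `--target_concept`, `--contents`, and `--mode` all take comma-separated lists inside a single quoted string. A minimal sketch of that convention is below; the helper name `split_csv_arg` and the loop structure are illustrative assumptions, not code from the SPEED repository:

```python
def split_csv_arg(value: str) -> list[str]:
    """Split a comma-separated CLI value such as 'original, edit' or
    'Snoopy, Mickey, Spongebob' into clean tokens, stripping whitespace
    and dropping empty entries."""
    return [tok.strip() for tok in value.split(",") if tok.strip()]

# Mirrors the arguments of the example command above.
modes = split_csv_arg("original, edit")
targets = split_csv_arg("Snoopy, Mickey, Spongebob")

for mode in modes:
    # 'original' samples from the base SD v1.4 model;
    # 'edit' samples from the erased checkpoint passed via --edit_ckpt.
    print(f"mode={mode} concepts={targets}")
```

Because `--mode 'original, edit'` expands to both modes, one invocation produces paired original/erased samples for a side-by-side comparison.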