---
base_model:
- CompVis/stable-diffusion-v1-4
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusers
---
# SPEED

## Model Description
This model (SPEED) introduces an efficient concept erasure approach that directly edits the parameters of large-scale text-to-image (T2I) diffusion models such as CompVis/stable-diffusion-v1-4. SPEED searches for a null space (a model-editing subspace in which parameter updates leave non-target concepts unaffected) to achieve scalable and precise erasure, removing 100 concepts in only 5 seconds.

It is based on the paper [SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models](https://arxiv.org/abs/2503.07392).
SPEED has three key characteristics:

- **Scalable:** SPEED seamlessly scales from single-concept to large-scale multi-concept erasure (e.g., 100 celebrities) without additional design.
- **Precise:** SPEED precisely removes the target concept (e.g., Snoopy) while preserving the semantic integrity of non-target concepts (e.g., Hello Kitty and SpongeBob).
- **Efficient:** SPEED can erase 100 concepts within 5 seconds, a ×350 speedup over the state-of-the-art (SOTA) method.
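The null-space idea can be illustrated with a short, generic sketch: constrain a parameter update so that it annihilates the feature directions of concepts you want to preserve. This is an illustrative toy, not the released SPEED implementation; the names `null_space_projection` and `K_p` are hypothetical, and the actual method involves more than this projection.

```python
import torch

def null_space_projection(K_p: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Projector onto the orthogonal complement of the preserved-concept
    keys K_p (shape d x n), so that P @ K_p ~= 0."""
    U, S, _ = torch.linalg.svd(K_p, full_matrices=True)
    rank = int((S > eps).sum())
    U_null = U[:, rank:]          # basis for the left null space of K_p
    return U_null @ U_null.T

# Toy check: any update right-multiplied by P leaves preserved keys untouched.
d, n = 8, 3
K_p = torch.randn(d, n)           # stand-in for non-target concept features
P = null_space_projection(K_p)
delta_W = torch.randn(d, d) @ P   # constrained parameter update
print(torch.allclose(delta_W @ K_p, torch.zeros(d, n), atol=1e-5))  # True
```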
More implementation details can be found in our GitHub repository.
## Sample Usage
Here's how to use the model for image sampling after concept erasure:
```bash
# Image Sampling
CUDA_VISIBLE_DEVICES=0 python sample.py \
    --erase_type 'instance' \
    --target_concept 'Snoopy, Mickey, Spongebob' \
    --contents 'Snoopy, Mickey, Spongebob, Pikachu, Hello Kitty' \
    --mode 'original, edit' \
    --edit_ckpt '{checkpoint_path}' \
    --num_samples 10 --batch_size 10 \
    --save_root 'logs/few-concept/instance'
```
In the command above, `--mode` selects the sampling mode:

- `original`: generate images with the original Stable Diffusion model.
- `edit`: generate images with the erased checkpoint.
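If you prefer to sample directly with diffusers rather than through `sample.py`, a minimal sketch follows. The checkpoint format here is an assumption (a state dict of edited UNet weights); `{checkpoint_path}` is the same placeholder as above, and the actual loading procedure may differ from the official repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the original base model that SPEED edits.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Assumption: the erased checkpoint is a state dict of edited UNet weights.
# Check the GitHub repository for the actual checkpoint format.
edited_state = torch.load("{checkpoint_path}", map_location="cpu")
pipe.unet.load_state_dict(edited_state, strict=False)

# Sample a non-target concept; it should be preserved after erasure.
image = pipe("a photo of Pikachu", num_inference_steps=50).images[0]
image.save("pikachu.png")
```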
## Citation

If you find this repository useful, please consider citing:
```bibtex
@misc{li2025speedscalablepreciseefficient,
  title={SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models},
  author={Ouxiang Li and Yuan Wang and Xinting Hu and Houcheng Jiang and Tao Liang and Yanbin Hao and Guojun Ma and Fuli Feng},
  year={2025},
  eprint={2503.07392},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.07392},
}
```