# LoRA text2image fine-tuning - LeonardoBenitez/temp_sparse_per_module_lora_distillation_gas_pump_by_truck

These are LoRA adaptation weights for `CompVis/stable-diffusion-v1-4`. The weights were fine-tuned to forget the `../SD_lora_munba/assets/imagenette_splits/n03425413/train_forget` dataset while retaining performance on `../SD_lora_munba/assets/imagenette_splits/n03425413/train_retain`.
## Intended uses & limitations

### How to use
[TODO: add an example code snippet for running this diffusion pipeline]
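In the meantime, here is a minimal sketch of how such LoRA adapter weights are typically loaded with 🤗 diffusers. It assumes a diffusers version with LoRA-loading support and a CUDA device; the repo id comes from this card's title, and the prompt and filename are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in this card.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights(
    "LeonardoBenitez/temp_sparse_per_module_lora_distillation_gas_pump_by_truck"
)

# Illustrative prompt: the adapter was trained to forget the gas-pump concept,
# so outputs for this prompt should differ from the base model's.
image = pipe("a photo of a gas pump", num_inference_steps=50).images[0]
image.save("example.png")
```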
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
## Evaluation results

All values are self-reported CLIP scores and, per the original metadata, every metric is listed as evaluated on the Forget set. Arrows give the desired direction for successful unlearning (↑ higher is better, ↓ lower is better; ~ marks a soft expectation).

| ForgetSet CLIP score | Mean (goal) | Std (goal) |
|---|---|---|
| Original model | 30.989 (~↑) | 3.361 (~↓) |
| Learned model | 26.446 (~↑) | 3.311 (~↓) |
| Unlearned model | 30.046 (↓) | 2.795 (~↓) |
| Learned - unlearned | -3.600 (↑) | 4.122 (~↓) |
| Original - unlearned | 0.943 (↑) | 3.242 (~↓) |
| Original - learned | 4.543 (↓) | 3.878 (~↓) |

| RetainSet CLIP score | Mean (goal) | Std (goal) |
|---|---|---|
| Original model | 29.462 (~↑) | 3.515 (~↓) |
| Learned model | 28.956 (~↓) | 3.597 (~↓) |
| Unlearned model | 29.565 (↑) | 3.718 (~↓) |
| Learned - unlearned | -0.609 (↓) | 2.471 (~↓) |
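The reported "difference" rows follow directly from the per-model means (e.g. learned minus unlearned on the Forget set is 26.446 - 30.046 = -3.600). A short check, using only the means reported above:

```python
# Reproduce the reported ForgetSet mean differences from the per-model means.
forget_mean = {"original": 30.989, "learned": 26.446, "unlearned": 30.046}

diff_learned_vs_unlearned = forget_mean["learned"] - forget_mean["unlearned"]
diff_original_vs_unlearned = forget_mean["original"] - forget_mean["unlearned"]
diff_original_vs_learned = forget_mean["original"] - forget_mean["learned"]

print(round(diff_learned_vs_unlearned, 3))   # -3.6
print(round(diff_original_vs_unlearned, 3))  # 0.943
print(round(diff_original_vs_learned, 3))    # 4.543
```

The unlearned model's Forget-set score (30.046) sits close to the original model's (30.989) rather than the learned model's (26.446), which is what the "original - unlearned" and "learned - unlearned" rows quantify.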