**Paper:** Continual Learning and Private Unlearning (arXiv:2203.12817)
This model was created by fine-tuning EleutherAI/deep-ignorance-unfiltered with the Gradient Difference unlearning algorithm, following Liu et al. (2022). Unlearning aims to remove specific knowledge from a pretrained language model while preserving its general capabilities.
| Parameter | Value |
|---|---|
| Base model | EleutherAI/deep-ignorance-unfiltered |
| Unlearning method | Gradient Difference |
| Learning rate | 4e-05 |
| Epochs | 3 |
| Batch size | 32 |
| Max sequence length | 512 |
| Optimizer | AdamW |
| Gradient clipping | 1.0 |
| Gradient accumulation steps | 1 |
| Seed | 42 |
| W&B / run name | grad_diff__ep3_lr4e-05_bs32_fw1.0_mle512_mli2048 |
| Forget weight | 1.0 |
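The Gradient Difference objective can be sketched as follows: the model performs gradient *ascent* on the forget set (its loss enters with a negative sign, scaled by the forget weight of 1.0 above) while performing ordinary gradient descent on a retain set. This is a minimal illustrative sketch, not the training script used for this model; the function name and signature are hypothetical.

```python
def gradient_difference_loss(forget_loss: float, retain_loss: float,
                             forget_weight: float = 1.0) -> float:
    """Combine forget- and retain-set losses for Gradient Difference.

    The forget-set loss is negated so that minimizing the combined
    objective *increases* loss on the data to be unlearned, while the
    retain-set term preserves general capabilities.
    """
    return -forget_weight * forget_loss + retain_loss


# Example: a forget-set loss of 2.0 and retain-set loss of 1.0 yield a
# combined objective of -1.0 with forget_weight=1.0.
combined = gradient_difference_loss(2.0, 1.0, forget_weight=1.0)
```

In an actual training loop, `forget_loss` and `retain_loss` would each be the language-modeling cross-entropy on a batch from the respective split, and the combined scalar would be backpropagated with the optimizer settings listed in the table.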