Evaluation results

All values are self-reported. Arrows mark the preferred direction for each metric (↑ higher is better, ↓ lower is better).

| Metric | Evaluated on | Value |
|---|---|---|
| ForgetSet CLIP score of original model, mean (~↑) | Forget set | 31.555 |
| ForgetSet CLIP score of original model, std (~↓) | Forget set | 1.602 |
| ForgetSet CLIP score of unlearned model, mean (↓) | Forget set | 31.574 |
| ForgetSet CLIP score of unlearned model, std (~↓) | Forget set | 2.340 |
| ForgetSet CLIP score difference between original and unlearned, mean (↑) | Forget set | -0.019 |
| ForgetSet CLIP score difference between original and unlearned, std (~↓) | Forget set | 0.845 |
| RetainSet CLIP score of original model, mean (~↑) | Forget set | 32.258 |
| RetainSet CLIP score of original model, std (~↓) | Forget set | 3.672 |
| RetainSet CLIP score of unlearned model, mean (↑) | Forget set | 34.412 |
| RetainSet CLIP score of unlearned model, std (~↓) | Forget set | 1.910 |
| RetainSet CLIP score difference between original and unlearned, mean (↓) | Forget set | -2.154 |
| RetainSet CLIP score difference between original and unlearned, std (~↓) | Forget set | 2.314 |
| Inference latency seconds, mean (↓) | Forget set | 1.883 |
| Inference latency seconds, std (~↓) | Forget set | 0.001 |
| Runtime init seconds (~↓) | ['cat'] (forget) and ['lion', 'tiger', 'leopard'] (retain) sets | 0.000 |
| Runtime data loading seconds (~↓) | ['cat'] (forget) and ['lion', 'tiger', 'leopard'] (retain) sets | 0.000 |
| Runtime training seconds (↓) | ['cat'] (forget) and ['lion', 'tiger', 'leopard'] (retain) sets | 3.358 |
| Runtime eval seconds (~↓) | ['cat'] (forget) and ['lion', 'tiger', 'leopard'] (retain) sets | 56.925 |
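
The CLIP scores above measure image–text alignment between generated images and their prompts, aggregated as a per-prompt mean and standard deviation. Below is a minimal sketch of how such statistics could be computed, assuming the `torchmetrics` `CLIPScore` metric and an illustrative CLIP backbone; this card does not state which evaluation code or CLIP variant was actually used, so the model name, prompts, and images here are placeholders.

```python
import torch
from torchmetrics.multimodal.clip_score import CLIPScore

# Hypothetical CLIP backbone; the card does not say which one was used.
clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

# Illustrative retain-set prompts (cf. the ['lion', 'tiger', 'leopard'] retain set).
prompts = ["a photo of a lion", "a photo of a tiger", "a photo of a leopard"]

# Stand-ins for images generated by the original or unlearned model:
# one uint8 tensor of shape (3, H, W) per prompt.
images = [torch.randint(0, 255, (3, 224, 224), dtype=torch.uint8) for _ in prompts]

# Score each (image, prompt) pair individually, then aggregate.
scores = torch.stack(
    [clip_score(img.unsqueeze(0), [txt]) for img, txt in zip(images, prompts)]
)
print(f"mean={scores.mean():.3f}  std={scores.std():.3f}")
```

Under this kind of setup, the "difference" rows in the table would be the original model's statistics minus the unlearned model's on the same prompts, so a positive mean difference on the Forget set indicates the unlearned model aligns less well with the forgotten concept.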