---
datasets:
- locuslab/TOFU
base_model:
- meta-llama/Llama-3.2-1B-Instruct
tags:
- Unlearning
- Forget10
---

# **NPO-Fix:** An enhancement of the NPO method with a self-generated dataset for robust unlearning under probabilistic decoding

## Model Details

- **Task:** [TOFU forget10](https://huggingface.co/datasets/locuslab/TOFU).
- **Base Method:** NPO.
- **Original Model:** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
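Since the card names NPO (Negative Preference Optimization) as the base method, a minimal sketch of the NPO objective may help orient readers. This is an illustrative implementation of the standard NPO loss for a single forget-set example, not the exact training code used for this model; the function name and interface are assumptions.

```python
import math

def npo_loss(logp_theta: float, logp_ref: float, beta: float = 0.1) -> float:
    """Sketch of the NPO loss for one forget-set example.

    logp_theta: sequence log-probability under the model being unlearned.
    logp_ref:   sequence log-probability under the frozen reference model.

    NPO loss: (2 / beta) * log(1 + (pi_theta / pi_ref)^beta),
    computed in log space for numerical stability. Minimizing it pushes
    logp_theta below logp_ref, i.e. it suppresses the forget data.
    """
    return (2.0 / beta) * math.log1p(math.exp(beta * (logp_theta - logp_ref)))
```

When the model matches the reference (`logp_theta == logp_ref`), the loss is `(2 / beta) * log(2)`; it shrinks toward zero as the model assigns less probability to the forget example than the reference does.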



### Model Sources 

<!-- Provide the basic links for the model. -->

- **Paper:** [Leak@k: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding](https://arxiv.org/abs/2511.04934)



## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{reisizadeh2025leak,
  title={Leak@$k$: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding},
  author={Reisizadeh, Hadi and Ruan, Jiajun and Chen, Yiwei and Pal, Soumyadeep and Liu, Sijia and Hong, Mingyi},
  journal={arXiv preprint arXiv:2511.04934},
  year={2025}
}
```

## Model Card Authors

Jiajun Ruan ([jruan@umn.edu](mailto:jruan@umn.edu))