---
configs:
- config_name: news_for_unlearning
data_files:
- split: forget_set
path: newsqa_forget_set.json
- split: retain_set
path: newsqa_retain_set.json
- config_name: news_infringement
data_files:
- split: blocklisted
path: newsqa_blocklisted_infringement.json
- config_name: news_utility
data_files:
- split: blocklisted
path: newsqa_blocklisted_utility.json
- split: in_domain
path: newsqa_indomain_utility.json
- config_name: books_infringement
data_files:
- split: blocklisted
path: booksum_blocklisted_infringement.json
- config_name: books_utility
data_files:
- split: blocklisted
path: booksum_blocklisted_utility.json
- split: in_domain
path: booksum_indomain_utility.json
---
# CoTaEval Dataset
The CoTaEval dataset is used to evaluate the feasibility and the side effects of copyright takedown methods for language models. It covers two domains: News and Books.
News has three subsets: ``news_for_unlearning`` (for unlearning), ``news_infringement`` (for infringement evaluation), and ``news_utility`` (for utility evaluation).
Books has two subsets: ``books_infringement`` (for infringement evaluation) and ``books_utility`` (for utility evaluation). The structure of the dataset is shown below:
- CoTaEval
- News
- news_for_unlearning
- forget_set (1k rows)
- retain_set (1k rows)
- news_infringement
- blocklisted (1k rows)
- news_utility
- blocklisted (500 rows)
- in_domain (500 rows)
- Books
- books_infringement
- blocklisted (500 rows)
- books_utility
    - blocklisted (500 rows; only the first 200 rows are used for utility evaluation)
- in_domain (200 rows)
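The configs and splits above can be kept in a small mapping for iterating over the whole benchmark; a minimal sketch (the dictionary below simply mirrors the listing above):

```python
# Mapping of CoTaEval config names to their splits, mirroring the structure above.
COTAEVAL_SPLITS = {
    "news_for_unlearning": ["forget_set", "retain_set"],
    "news_infringement": ["blocklisted"],
    "news_utility": ["blocklisted", "in_domain"],
    "books_infringement": ["blocklisted"],
    "books_utility": ["blocklisted", "in_domain"],
}

for config, splits in COTAEVAL_SPLITS.items():
    print(f"{config}: {', '.join(splits)}")
```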
# Usage
## For infringement test (take news as an example):
```python
from datasets import load_dataset
dataset = load_dataset("boyiwei/CoTaEval", "news_infringement", split="blocklisted")
```
We use ``prompt_autocomplete`` as the hint to prompt the model, and compute eight infringement metrics between the generated content and ``gt_autocomplete``.
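The exact set of metrics is defined in the CoTaEval codebase; as an illustration only, one widely used overlap metric is the ROUGE-L F-measure, which can be sketched in plain Python (``generated`` and ``reference`` below are placeholder strings, and ``rouge_l_f1`` is a hypothetical helper name):

```python
def rouge_l_f1(generated: str, reference: str) -> float:
    """ROUGE-L F-measure: token overlap based on the longest common subsequence."""
    a, b = generated.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a):
        for j, tok_b in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if tok_a == tok_b else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[len(a)][len(b)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(a), lcs / len(b)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat sat on the mat", "the cat sat on the mat"))  # identical strings → 1.0
```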
## For utility test (take news as an example):
```python
from datasets import load_dataset
dataset = load_dataset("boyiwei/CoTaEval", "news_utility", split="blocklisted") # use split="in_domain" for in-domain utility test
```
For news, we use ``question`` to prompt the model and compute the F1 score between the generated content and ``answer``. For books, we ask the model to summarize the book chapter and compute the ROUGE score between the generated content and ``summary``.
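The token-level F1 score used for news QA can be sketched as follows (a simplified version that skips the answer normalization of standard QA evaluation scripts; ``token_f1`` is a hypothetical helper name):

```python
from collections import Counter

def token_f1(prediction: str, ground_truth: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the answer is Paris", "Paris"))  # → 0.4
```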
## For unlearning (please refer to [TOFU](https://github.com/locuslab/tofu) for more details)
```python
from datasets import load_dataset
dataset = load_dataset("boyiwei/CoTaEval", "news_for_unlearning", split="forget_set") # use split="retain_set" to get retain set
```