---
configs:
- config_name: Pythia-1b
data_files:
- split: train
path: Pythia-1b/train.jsonl
- split: ref
path: Pythia-1b/ref.jsonl
- config_name: Llama-3.2-1B
data_files:
- split: train
path: Llama-3.2-1B/train.jsonl
- split: ref
path: Llama-3.2-1B/ref.jsonl
- config_name: Llama-3.1-8B
data_files:
- split: train
path: Llama-3.1-8B/train.jsonl
- split: ref
path: Llama-3.1-8B/ref.jsonl
---
## Overview
This dataset is designed to evaluate data attribution methods for factual tracing. For each example in the reference set, there exists a subset of supporting training examples that we aim to retrieve.
Note that all models are fine-tuned on the same training set, but each model has its own reference set, capturing the instances where that model exposes factual behavior during evaluation.
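The retrieval task described above can be scored with a standard top-k recall metric. A minimal sketch, assuming an attribution method has already ranked the training examples for one reference example; the names `recall_at_k`, `ranked`, and `gold` are illustrative, not part of the dataset:

```python
# Sketch: for one reference example, an attribution method ranks training
# examples by relevance, and we measure what fraction of the gold supporting
# examples appear in the top-k of that ranking.

def recall_at_k(ranked_train_ids, supporting_ids, k):
    """Fraction of gold supporting training examples retrieved in the top-k."""
    top_k = set(ranked_train_ids[:k])
    gold = set(supporting_ids)
    return len(top_k & gold) / len(gold)

# Hypothetical ranking and gold set, using the dataset's `data_id` format.
ranked = ["ftrace_12", "ftrace_0", "ftrace_7", "ftrace_3"]
gold = ["ftrace_0", "ftrace_3"]
print(recall_at_k(ranked, gold, 2))  # 0.5: one of two gold examples in top-2
```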
---
## Structure
Each entry in the dataset contains the following fields:
- `data_id` (str): unique identifier.
- `prompt` (str): input query.
- `response` (str): training label.
- `facts` (List[str]): atomic facts the example supports, each encoded as a comma-separated triple of Wikidata-style property and entity IDs (see the example below).
---
## Stats
| Model/Split | Train | Ref |
| --- | --- | --- |
| Pythia-1b | 6708 | 30 |
| Llama-3.2-1B | 6708 | 101 |
| Llama-3.1-8B | 6708 | 30 |
---
## Example
```json
{
"data_id": "ftrace_0",
"prompt": "Complete the sentence by filling in the blank:\n Tamazight and other Berber varieties are spoken in Morocco, <blank>, Libya, Tunisia, northern Mali, and northern Niger by about 25 to 35 million people.\n ",
"response": "Algeria",
"facts": ["P47,Q262,Q1028", "P37,Q25448,Q262", "P47,Q1028,Q262", "P47,Q1016,Q262", "P47,Q948,Q262", "P47,Q912,Q262", "P47,Q1032,Q262", "P47,Q262,Q1016", "P47,Q948,Q1016", "P47,Q1032,Q1016", "P47,Q262,Q948", "P47,Q1016,Q948", "P47,Q262,Q912", "P47,Q1032,Q912", "P47,Q262,Q1032"]
}
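Each `facts` string can be split into a (property, subject, object) triple of IDs. A minimal sketch using only the standard library; the helper name `parse_facts` is ours, not part of the dataset:

```python
# Sketch: turn the `facts` field of a record into ID triples.

def parse_facts(facts):
    """Split each comma-separated fact string into a (property, subject, object) tuple."""
    return [tuple(f.split(",")) for f in facts]

# Abridged version of the example record above.
record = {
    "data_id": "ftrace_0",
    "response": "Algeria",
    "facts": ["P47,Q262,Q1028", "P37,Q25448,Q262"],
}

triples = parse_facts(record["facts"])
print(triples[0])  # ('P47', 'Q262', 'Q1028')
```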