---
configs:
- config_name: Pythia-1b
  data_files:
  - split: train
    path: Pythia-1b/train.jsonl
  - split: ref
    path: Pythia-1b/ref.jsonl
- config_name: Llama-3.2-1B
  data_files:
  - split: train
    path: Llama-3.2-1B/train.jsonl
  - split: ref
    path: Llama-3.2-1B/ref.jsonl
- config_name: Llama-3.1-8B
  data_files:
  - split: train
    path: Llama-3.1-8B/train.jsonl
  - split: ref
    path: Llama-3.1-8B/ref.jsonl
---

## Overview

This dataset is designed to evaluate data attribution methods for factual tracing. For each example in the reference set, there exists a subset of supporting training examples that an attribution method should retrieve. Importantly, all models are fine-tuned on the same training set, but each model has its own reference set, which captures the specific instances that expose that model's factual behavior during evaluation.

---

## Structure

Each entry in the dataset contains the following fields:

- `data_id` (str): unique identifier.
- `prompt` (str): input query.
- `response` (str): training label (the expected completion).
- `facts` (List[str]): the atomic facts the example supports, each encoded as a comma-separated identifier triple (e.g. `"P47,Q262,Q1028"`).

---

## Stats

| Model / Split | Train | Ref |
| --- | --- | --- |
| Pythia-1b | 6708 | 30 |
| Llama-3.2-1B | 6708 | 101 |
| Llama-3.1-8B | 6708 | 30 |

---

## Example

```json
{
  "data_id": "ftrace_0",
  "prompt": "Complete the sentence by filling in the blank:\n Tamazight and other Berber varieties are spoken in Morocco, , Libya, Tunisia, northern Mali, and northern Niger by about 25 to 35 million people.\n ",
  "response": "Algeria",
  "facts": ["P47,Q262,Q1028", "P37,Q25448,Q262", "P47,Q1028,Q262", "P47,Q1016,Q262", "P47,Q948,Q262", "P47,Q912,Q262", "P47,Q1032,Q262", "P47,Q262,Q1016", "P47,Q948,Q1016", "P47,Q1032,Q1016", "P47,Q262,Q948", "P47,Q1016,Q948", "P47,Q262,Q912", "P47,Q1032,Q912", "P47,Q262,Q1032"]
}
```
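
---

## Working with `facts`

A minimal sketch of parsing the `facts` field, assuming each string encodes a relation triple as `property,subject,object` with Wikidata-style P/Q identifiers (an interpretation inferred from the example above, not stated in the card). The `parse_fact` and `facts_by_property` helpers are hypothetical names introduced here for illustration; the splits themselves can be loaded with `datasets.load_dataset`, passing the desired config name and split.

```python
# Assumed interpretation: a fact string like "P47,Q262,Q1028" is a
# "property,subject,object" triple of Wikidata-style identifiers.

def parse_fact(fact: str) -> tuple[str, str, str]:
    """Split a fact string into its (property, subject, object) parts."""
    prop, subj, obj = fact.split(",")
    return prop, subj, obj

def facts_by_property(facts: list[str]) -> dict[str, list[tuple[str, str]]]:
    """Group the (subject, object) pairs of an example's facts by property ID."""
    grouped: dict[str, list[tuple[str, str]]] = {}
    for fact in facts:
        prop, subj, obj = parse_fact(fact)
        grouped.setdefault(prop, []).append((subj, obj))
    return grouped

# Example using the facts from "ftrace_0" above (truncated):
facts = ["P47,Q262,Q1028", "P37,Q25448,Q262", "P47,Q1028,Q262"]
grouped = facts_by_property(facts)
# grouped["P47"] -> [("Q262", "Q1028"), ("Q1028", "Q262")]
```

Grouping by property is one convenient shape for matching a reference example's facts against the facts of candidate training examples when scoring retrieval.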