
# UltraEditBench

UltraEditBench is the largest publicly available dataset to date for the task of model editing.

This dataset was introduced in the paper:

*ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models*


## 📦 Dataset Overview

Each sample in UltraEditBench includes three core instances (each a question–answer pair):

| Component | Description | Count |
| --- | --- | --- |
| Editing Instance | A factual question–answer pair involving the target entity, used to test Efficacy. | 2,008,326 |
| Equivalent Instance | A paraphrased version of the editing instance, used to test Generalization. | 2,008,326 |
| Unrelated Instance | An unrelated question–answer pair, used to test Specificity. | 2,008,326 |

These components enable evaluation along three metrics:

| Metric | Description |
| --- | --- |
| Efficacy | Whether the edited model correctly reflects the updated fact. |
| Generalization | Whether the edit carries over to semantically equivalent questions. |
| Specificity | Whether unrelated knowledge remains unaffected. |
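The mapping from instances to metrics can be sketched as a simple evaluation loop. The sketch below is illustrative, not the paper's evaluation code: `model` stands for any callable that maps a question string to an answer string, the field names follow the keys described in this README, and plain exact match is used as a simplified scoring rule.

```python
def exact_match(prediction: str, reference: str) -> bool:
    """Case-insensitive exact-match scoring (an illustrative simplification)."""
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(model, samples):
    """Score an edited model on the three UltraEditBench metrics.

    `model`: any callable mapping a question string to an answer string.
    `samples`: records with the keys documented in this README.
    """
    n = len(samples)
    # Efficacy: does the model answer the editing question with the new fact?
    efficacy = sum(exact_match(model(s["prompt"]), s["ans"]) for s in samples) / n
    # Generalization: does the edit hold on a paraphrase of the question?
    generalization = sum(exact_match(model(s["rephrase_prompt"]), s["ans"]) for s in samples) / n
    # Specificity: are unrelated questions still answered as before?
    specificity = sum(exact_match(model(s["loc"]), s["loc_ans"]) for s in samples) / n
    return {"efficacy": efficacy, "generalization": generalization, "specificity": specificity}
```

Any answer-normalization or token-level scoring used in practice can replace `exact_match` without changing the structure of the loop.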

## 🔑 Key Descriptions

Each sample in UltraEditBench includes three full instances (question–answer pairs) and associated metadata:

| Key | Description |
| --- | --- |
| `case_id` | Unique identifier for the sample (e.g., `"00001"`). |
| `prompt` | The question part of the Editing Instance: a factual question targeting a specific knowledge update. |
| `ans` | The answer part of the Editing Instance: the desired output after the model is edited. |
| `subject` | The entity mentioned in the editing question; provided for compatibility with subject-centric methods. |
| `rephrase_prompt` | The question part of the Equivalent Instance: a paraphrased version of `prompt`. |
| `loc` | The question part of the Unrelated Instance: factually unrelated to the edited fact. |
| `loc_ans` | The answer part of the Unrelated Instance: should remain unchanged after editing. |
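For reference, a record following this key schema can be represented and sanity-checked as follows. The field values below are invented purely for illustration; actual records come from the dataset itself (e.g., loaded with the 🤗 `datasets` library).

```python
# Keys documented in this README; every UltraEditBench record should expose them.
REQUIRED_KEYS = {"case_id", "prompt", "ans", "subject",
                 "rephrase_prompt", "loc", "loc_ans"}

def validate_sample(sample: dict) -> bool:
    """Return True if the record carries all documented keys."""
    return REQUIRED_KEYS.issubset(sample.keys())

# A hypothetical record (values invented for illustration only):
example = {
    "case_id": "00001",
    "prompt": "What instrument does Jane Doe play?",
    "ans": "the cello",
    "subject": "Jane Doe",
    "rephrase_prompt": "Which instrument is played by Jane Doe?",
    "loc": "What is the boiling point of water at sea level?",
    "loc_ans": "100 degrees Celsius",
}
```

A check like `validate_sample` is useful when converting the data into the input format expected by a particular editing method.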

## 💡 Citation

If you use this dataset, please cite:

```bibtex
@misc{gu2025ultraedittrainingsubjectmemoryfree,
      title={UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models},
      author={Xiaojie Gu and Ziying Huang and Jia-Chen Gu and Kai Zhang},
      year={2025},
      eprint={2505.14679},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.14679},
}
```

## 📨 Contact