---
task_categories:
  - question-answering
  - text-generation
language: en
tags:
  - model-editing
  - lifelong-learning
---

# UltraEditBench

UltraEditBench is the largest publicly available dataset to date for the task of model editing.

This dataset was introduced in the paper:

**UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models**

Code: https://github.com/XiaojieGu/UltraEdit


## 📦 Dataset Overview

Each sample in UltraEditBench includes three core instances (each a question–answer pair):

| Component | Description | Count |
|---|---|---|
| Editing Instance | A factual question–answer pair involving the target entity, used to test **Efficacy**. | 2,008,326 |
| Equivalent Instance | A paraphrased version of the editing instance, used to test **Generalization**. | 2,008,326 |
| Unrelated Instance | An unrelated question–answer pair, used to test **Specificity**. | 2,008,326 |

These components enable evaluation along three metrics:

| Metric | Description |
|---|---|
| **Efficacy** | Whether the model correctly reflects the updated fact. |
| **Generalization** | Whether the edit applies to semantically similar questions. |
| **Specificity** | Whether unrelated knowledge remains unaffected. |

## 🔑 Key Descriptions

Each sample in UltraEditBench includes three full instances (question–answer pairs) and associated metadata:

| Key | Description |
|---|---|
| `case_id` | Unique identifier for the sample (e.g., `"00001"`). |
| `prompt` | The question part of the Editing Instance: a factual question targeting a specific knowledge update. |
| `ans` | The answer part of the Editing Instance: the desired output after the model is edited. |
| `subject` | The entity mentioned in the editing question, provided for compatibility with subject-centric methods. |
| `rephrase_prompt` | The question part of the Equivalent Instance: a paraphrased version of `prompt`. |
| `loc` | The question part of the Unrelated Instance: factually unrelated to the editing fact. |
| `loc_ans` | The answer part of the Unrelated Instance: should remain unchanged after editing. |
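With the keys above, each record can be scored with simple exact-match checks. The sketch below is a minimal illustration, not the paper's evaluation code: the sample values are made up, and `predictions` is a hypothetical stand-in for querying the edited model.

```python
# A made-up sample following the documented schema (values are illustrative).
sample = {
    "case_id": "00001",
    "prompt": "Who is the current CEO of ExampleCorp?",
    "ans": "Jane Doe",
    "subject": "ExampleCorp",
    "rephrase_prompt": "ExampleCorp is currently led by which CEO?",
    "loc": "What is the capital of France?",
    "loc_ans": "Paris",
}

def evaluate_edit(predictions: dict, sample: dict) -> dict:
    """Exact-match scores for one sample; `predictions` maps question -> model output."""
    return {
        # Efficacy: the edited fact itself is reproduced.
        "efficacy": predictions[sample["prompt"]] == sample["ans"],
        # Generalization: the paraphrase yields the same updated answer.
        "generalization": predictions[sample["rephrase_prompt"]] == sample["ans"],
        # Specificity: the unrelated answer is unchanged after editing.
        "specificity": predictions[sample["loc"]] == sample["loc_ans"],
    }

# Pretend the edited model answered as follows:
predictions = {
    sample["prompt"]: "Jane Doe",
    sample["rephrase_prompt"]: "Jane Doe",
    sample["loc"]: "Paris",
}
print(evaluate_edit(predictions, sample))
# {'efficacy': True, 'generalization': True, 'specificity': True}
```

In practice, exact match is often replaced by token-level or normalized-string comparison; the three-way structure of the check stays the same.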

## 🚀 Sample Usage

### Setup

Create the environment and install dependencies:

```bash
conda create -n ultraedit python=3.10
conda activate ultraedit
pip install torch==2.3.0+cu121 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```

> 💡 If you want to try editing a Mistral-7B model, even a 24GB consumer GPU is enough: model editing for everyone!

### Run Experiments

Run the main experiment with:

```bash
sh run.sh
```

The `run.sh` script includes a sample command like:

```bash
python main.py dataset=zsre model=mistral-7b editor=ultraedit \
    num_seq=200 \
    editor.cache_dir=cache \
    dataset.batch_size=10 \
    dataset.n_edits=100 \
    model.edit_modules="[model.layers.29.mlp.down_proj, model.layers.30.mlp.down_proj]"
```

Here `num_seq` is the number of editing turns and `dataset.n_edits` is the number of edits applied per turn. (Note: inline `#` comments cannot follow a line-continuation backslash in shell, so the parameters are annotated here instead.)
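As a quick sanity check on the configuration above (this arithmetic is illustrative, not part of `run.sh`), the total number of edits is `num_seq × dataset.n_edits`:

```python
num_seq = 200     # number of editing turns
n_edits = 100     # edits applied per turn
batch_size = 10   # samples per batch within a turn

total_edits = num_seq * n_edits        # 200 * 100 = 20000 edits overall
batches_per_turn = n_edits // batch_size  # 100 / 10 = 10 batches per turn

print(total_edits)       # 20000
print(batches_per_turn)  # 10
```

This matches the 20K-sample scale quoted for the Mistral-7B run.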

> 💡 Try editing 20K samples on Mistral-7B in under 5 minutes: ultra-efficient!


## 💡 Citation

If you use this dataset, please cite:

```bibtex
@article{gu2025ultraedit,
  title={UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models},
  author={Gu, Xiaojie and Chen, Guangxu and Li, Jungang and Gu, Jia-Chen and Hu, Xuming and Zhang, Kai},
  journal={arXiv preprint arXiv:2505.14679},
  year={2025}
}
```

## 📨 Contact