Improve UltraEditBench dataset card: Add metadata, GitHub link, and sample usage

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +47 -1
README.md CHANGED
@@ -1,3 +1,13 @@
  # UltraEditBench

  UltraEditBench is the largest publicly available dataset to date for the task of model editing.
@@ -6,6 +16,8 @@ This dataset was introduced in the paper:

  > [ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models](https://arxiv.org/abs/2505.14679)

  ---

  ## 📦 Dataset Overview
@@ -44,6 +56,39 @@ Each sample in UltraEditBench includes three full instances (question–answer p

  ---

  ## 💡 Citation

  If you use this dataset, please cite:
@@ -60,4 +105,5 @@ If you use this dataset, please cite:
  ## 📨 Contact

  - **Email**: [peettherapynoys@gmail.com](mailto:peettherapynoys@gmail.com)
- - **GitHub Issues**: [github.com/XiaojieGu/UltraEdit](https://github.com/XiaojieGu/UltraEdit/issues)
 
 
+ ---
+ task_categories:
+ - question-answering
+ - text-generation
+ language: en
+ tags:
+ - model-editing
+ - lifelong-learning
+ ---
+
  # UltraEditBench

  UltraEditBench is the largest publicly available dataset to date for the task of model editing.

  > [ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models](https://arxiv.org/abs/2505.14679)

+ Code: https://github.com/XiaojieGu/UltraEdit
+
  ---

  ## 📦 Dataset Overview

  ---

+ ## 🚀 Sample Usage
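Before the environment setup, a loading sketch in the usual 🤗 `datasets` style could round out this section. Note the repo id below is an assumption (the card's actual Hub id is not shown in this diff), and the sample's field names are not documented in this excerpt:

```python
from datasets import load_dataset

# Assumed Hub repo id; verify against this card's actual id on the Hub.
ds = load_dataset("XiaojieGu/UltraEditBench", split="train")

# Each sample bundles three question-answer instances (see Dataset Overview);
# inspect one record to see the actual field names.
print(ds[0])
```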
+
+ ### Setup
+ Create the environment and install dependencies:
+
+ ```bash
+ conda create -n ultraedit python=3.10
+ conda activate ultraedit
+ pip install torch==2.3.0+cu121 --index-url https://download.pytorch.org/whl/cu121
+ pip install -r requirements.txt
+ ```
+ 💡 If you want to try editing a Mistral-7B model, even a **24GB consumer GPU** is enough — model editing for everyone!
+
+ ### Run Experiments
+ Run the main experiment with:
+
+ ```bash
+ sh run.sh
+ ```
+
+ The `run.sh` script includes a sample command like:
+
+ ```bash
+ # num_seq: number of editing turns; dataset.n_edits: edits applied per turn
+ python main.py dataset=zsre model=mistral-7b editor=ultraedit num_seq=200 \
+     editor.cache_dir=cache \
+     dataset.batch_size=10 \
+     dataset.n_edits=100 \
+     model.edit_modules="[model.layers.29.mlp.down_proj, model.layers.30.mlp.down_proj]"
+ ```
+ 💡 Just try editing **20K samples** on Mistral-7B in **under 5 minutes** — ultra-efficient!
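As a sanity check on the scale claimed above, the parameters in the sample command multiply out as follows (plain arithmetic, using the parameter meanings given in the comments):

```python
# Parameters from the sample command above.
num_seq = 200     # number of editing turns
n_edits = 100     # edits applied per turn
batch_size = 10   # mini-batch size within a turn

total_edits = num_seq * n_edits            # edits across the whole run
batches_per_turn = n_edits // batch_size   # mini-batches per turn

print(total_edits, batches_per_turn)  # 20000 10
```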
+
+ ---
+
  ## 💡 Citation

  If you use this dataset, please cite:
 
  ## 📨 Contact

  - **Email**: [peettherapynoys@gmail.com](mailto:peettherapynoys@gmail.com)
+ - **GitHub Issues**: [github.com/XiaojieGu/UltraEdit](https://github.com/XiaojieGu/UltraEdit/issues)
+ ```