  num_examples: 57989
  download_size: 3514148
  dataset_size: 11868266
language:
- en
tags:
- diagnostic
- perturbation
- homoglyphs
pretty_name: Ad-Word
size_categories:
- 100K<n<1M
---
# Ad-Word Dataset

The Ad-Word dataset contains adversarial word perturbations created with 9 different attack strategies, organized into three classes: phonetic, typo, and visual attacks. The dataset, introduced in ["Close or Cloze? Assessing the Robustness of Large Language Models to Adversarial Perturbations via Word Recovery"](https://aclanthology.org/2025.coling-main.467), contains 7,911 words perturbed multiple times with each attack strategy, yielding 327,382 clean-perturbed word pairs organized by attack.

## Dataset Construction

The base vocabulary was constructed from the most frequent 10,000 words in the Trillion Word Corpus, excluding words shorter than four characters. The dataset was then augmented with:
- 250 uncommon English words added to the test set
- 100 common English borrowed words that are frequently stylized with accents (50 in train, 25 in test, 25 in validation)

These additions were sampled from the Wikitext corpus (`wikitext-103-v1`) to help bound the performance of models that ignore non-ASCII characters or rely on limited dictionaries.
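The vocabulary filter described above (top-N by frequency, minimum length four) is straightforward to reproduce. A minimal sketch, using a short stand-in word list rather than the actual Trillion Word Corpus:

```python
# Hypothetical frequency-ranked word list; the real base vocabulary comes
# from the most frequent 10,000 words of the Trillion Word Corpus.
freq_ranked = ["the", "of", "and", "word", "about", "data", "cat"]

# Take the top-N words, then drop anything shorter than four characters.
vocab = [w for w in freq_ranked[:10_000] if len(w) >= 4]
print(vocab)  # ['word', 'about', 'data']
```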

## Attack Strategies

The perturbations are organized into three classes, grouped by the information each attack is meant to **preserve**. For instance, visual attacks use homoglyphs that look similar to the original characters but may not preserve phonetic similarity when read aloud.

1. Phonetic Attacks
   - ANTHRO Phonetic [Le et al., 2022]
   - PhoneE [Moffett and Dhingra, 2025], introduced with this dataset
   - Zeroé Phonetic [Eger and Benz, 2020]

2. Typo Attacks
   - ANTHRO Typo [Le et al., 2022]
   - Zeroé Noise [Eger and Benz, 2020]
   - Zeroé Typo [Eger and Benz, 2020]

3. Visual Attacks
   - DCES [Eger et al., 2019]
   - ICES [Eger et al., 2019]
   - LEGIT [Seth et al., 2023]
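To make the visual class concrete: homoglyph attacks swap characters for lookalikes from other Unicode blocks, so the perturbed string renders almost identically while comparing unequal. A minimal illustration (the example pair is illustrative, not drawn from the dataset):

```python
clean = "apple"
perturbed = "\u0430pple"  # Cyrillic Small Letter A (U+0430) replaces Latin 'a'

print(perturbed)           # renders nearly identically to "apple"
print(clean == perturbed)  # False: the underlying code points differ
print(f"U+{ord(perturbed[0]):04X}")  # U+0430
```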

## Per-Attack Unique Clean-Perturbed Pairs

| Attack Class | Attack Name     | Train  | Valid | Test  |
|--------------|-----------------|--------|-------|-------|
| phonetic     | anthro_phonetic | 17,649 | 4,098 | 4,787 |
| phonetic     | phonee          | 24,339 | 5,551 | 6,439 |
| phonetic     | zeroe_phonetic  | 28,562 | 6,514 | 7,468 |
| typo         | anthro_typo     | 15,437 | 3,587 | 4,137 |
| typo         | zeroe_noise     | 27,079 | 6,233 | 7,173 |
| typo         | zeroe_typo      | 19,912 | 4,721 | 5,314 |
| visual       | dces            | 28,722 | 6,625 | 7,560 |
| visual       | ices            | 29,324 | 6,762 | 7,713 |
| visual       | legit           | 27,796 | 6,481 | 7,398 |

## Dataset Structure

The dataset contains the following columns:
- `clean`: the original word
- `perturbed`: the perturbed version of the word
- `attack`: the attack strategy used to perturb the word

The dataset is split into `train`/`valid`/`test` splits, with each split containing an independent set of word perturbations from all attack strategies.
There are 5,131 unique **clean** words in the `train` split, 1,214 in the `valid` split, and 1,584 in the `test` split.

## Usage Example

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

adword = load_dataset("lmoffett/ad-word")

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

samples = random.sample(list(adword['test']), 3)

# Test recovery
for sample in samples:
    # This is not a tuned prompt, just a simple example
    prompt = f"""This word has a typo in it. Can you figure out what the original word was?
Word with typo: "{sample['perturbed']}"
Oh, "{sample['perturbed']}" is a misspelling of the word \""""

    inputs = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=5)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    print('-' * 60)
    print(f"{sample['clean']} -> {sample['perturbed']}")
    print(f"{response}")
```
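To score recovery, one simple option is exact match on the model's continuation: take the text generated after the prompt, cut at the closing quote, and compare to the clean word. A hypothetical scoring helper (the parsing rule is an assumption tied to the prompt format above, not part of the dataset):

```python
def extract_prediction(full_response: str, prompt: str) -> str:
    """Return the word the model produced after the prompt, up to the closing quote."""
    continuation = full_response[len(prompt):]
    return continuation.split('"')[0].strip().lower()

def recovery_accuracy(clean_words, predictions):
    """Fraction of predictions that exactly match the clean word (case-insensitive)."""
    hits = sum(p == c.lower() for c, p in zip(clean_words, predictions))
    return hits / len(clean_words)

# Example with made-up responses:
prompt = 'misspelling of the word "'
responses = [prompt + 'hello".', prompt + 'world".', prompt + 'walrus".']
preds = [extract_prediction(r, prompt) for r in responses]
print(preds)  # ['hello', 'world', 'walrus']
print(recovery_accuracy(["hello", "world", "would"], preds))  # 2/3
```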

## References

- [Le et al., 2022] Le, Thai, et al. "Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense." arXiv preprint arXiv:2203.10346 (2022).
- [Eger and Benz, 2020] Eger, Steffen, and Yannik Benz. "From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks." Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 2020.
- [Eger et al., 2019] Eger, Steffen, et al. "Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems." arXiv preprint arXiv:1903.11508 (2019).
- [Seth et al., 2023] Seth, Dev, et al. "Learning the Legibility of Visual Text Perturbations." arXiv preprint arXiv:2303.05077 (2023).

## Related Resources

- Cloze or Close Code Repository (including PhoneE): [GitHub](https://github.com/lmoffett/cloze-or-close)
- LEGIT Dataset: [HuggingFace](https://huggingface.co/datasets/dvsth/LEGIT)
- Zeroé Repository: [GitHub](https://github.com/yannikbenz/zeroe)
- ANTHRO Repository: [GitHub](https://github.com/lethaiq/perturbations-in-the-wild)

## Version History

### v1.0 (January 2025)
- Initial release of the Ad-Word dataset
- Set of perturbations from 9 attack strategies
- Train/valid/test splits with unique clean-perturbed pairs

## License

This dataset is licensed under Apache 2.0.

## Citation

If you use this dataset in your research, please cite the original paper:

```bibtex
@inproceedings{moffett-dhingra-2025-close,
    title = "Close or Cloze? Assessing the Robustness of Large Language Models to Adversarial Perturbations via Word Recovery",
    author = "Moffett, Luke and Dhingra, Bhuwan",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    year = "2025",
    publisher = "Association for Computational Linguistics",
    pages = "6999--7019"
}
```

## Limitations

There is no definitive measurement of the effectiveness of these attacks.
The original paper provides human baselines, but many factors affect the recoverability of perturbed words.
When applying these attacks to new problems, researchers should ensure that the attacks align with their expectations.
For instance, the ANTHRO attacks are sourced from public internet corpora.
In some cases, there are very few attacks for a given word, and, in many cases, those attacks only involve casing changes.
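One way to guard against the casing-only perturbations mentioned above is to screen pairs before use. A minimal check (a hypothetical helper, not part of the dataset; the word pairs are made up):

```python
def is_casing_only(clean: str, perturbed: str) -> bool:
    """True when the perturbation changes letter case and nothing else."""
    return clean != perturbed and clean.lower() == perturbed.lower()

pairs = [("hello", "Hello"), ("hello", "he11o"), ("hello", "hello")]
print([is_casing_only(c, p) for c, p in pairs])  # [True, False, False]
```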