Improve dataset card: Add paper, code links, task categories, description, and sample usage

#2
by nielsr HF Staff - opened
Files changed (1): README.md (+86 -2)
@@ -1,5 +1,89 @@
  ---
- license: mit
  language:
  - en
- ---
  ---
  language:
  - en
+ license: mit
+ task_categories:
+ - feature-extraction
+ - text-classification
+ tags:
+ - anomaly-detection
+ - nlp
+ - benchmark
+ ---
+
+ # NLP-ADBench: NLP Anomaly Detection Benchmark
+
+ This repository contains **NLP-ADBench**, the most comprehensive NLP anomaly detection (NLP-AD) benchmark to date. It introduces 8 curated datasets, derived and transformed from existing NLP classification datasets, that are specifically tailored to anomaly detection and presented in a unified standard format to support and advance research in this domain.
+
+ The benchmark includes results from 19 algorithms applied to the 8 NLPAD datasets, categorized into two groups:
+ - 3 end-to-end algorithms that directly process raw text data to produce anomaly detection outcomes.
+ - 16 embedding-based algorithms, created by applying 8 traditional anomaly detection methods to text embeddings generated using two models: BERT's `bert-base-uncased` (**BERT**) and OpenAI's `text-embedding-3-large` (**OpenAI**).
+
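As a hypothetical sketch of the embedding-based group (illustrative only, not code from this repository): a classic anomaly detector such as scikit-learn's `IsolationForest` applied to 768-dimensional vectors, here synthetic stand-ins for `bert-base-uncased` embeddings.

```python
# Illustrative sketch: a traditional anomaly detector applied to
# text-embedding vectors. The 768-dim Gaussian vectors below are
# synthetic stand-ins for bert-base-uncased embeddings; real usage
# would load the pre-extracted embeddings instead.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 768))    # "inlier" embeddings
anomalies = rng.normal(4.0, 1.0, size=(20, 768))  # shifted "outlier" embeddings
X = np.vstack([normal, anomalies])

detector = IsolationForest(random_state=0).fit(normal)
# score_samples: higher means more normal, so negate to get anomaly scores
scores = -detector.score_samples(X)
print(scores[500:].mean() > scores[:500].mean())  # expected: True, anomalies score higher
```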
+ Paper: [NLP-ADBench: NLP Anomaly Detection Benchmark](https://huggingface.co/papers/2412.04784)
+ Code: https://github.com/USC-FORTIS/NLP-ADBench
+
+ ![Performance comparison of 19 algorithms on 8 NLPAD datasets using AUROC](https://github.com/USC-FORTIS/NLP-ADBench/blob/main/figs/benchmark.png)
+
+ ## NLPAD Datasets
+
+ The datasets required for this project can be downloaded from the following Hugging Face links:
+
+ 1. **NLPAD Datasets**: These are the datasets described in the NLP-ADBench paper. You can download them from:
+    - [NLP-AD Datasets](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/datasets)
+ 2. **Pre-Extracted Embeddings**: For embedding-based algorithms, pre-extracted embeddings are provided. If you want to use them directly, you can download them from:
+    - [Pre-Extracted Embeddings](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/embeddings)
+
+ ## Sample Usage
+
+ To run the benchmark, first set up the environment and import the pre-extracted embeddings:
+
+ ### Environment Setup Instructions
+
+ 1. **Install Anaconda or Miniconda**:
+    Download and install Anaconda or Miniconda from [here](https://docs.conda.io/en/latest/miniconda.html).
+ 2. **Create the Environment**:
+    Using the terminal, navigate to the directory containing the `environment.yml` file in the GitHub repository and run:
+    ```bash
+    conda env create -f environment.yml
+    ```
+ 3. **Activate the Environment**:
+    Activate the newly created environment using:
+    ```bash
+    conda activate nlpad
+    ```
+
+ ### Import data
+
+ Download the `Pre-Extracted Embeddings` data from the [Hugging Face link](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/embeddings) and place all downloaded embedding files into the `feature` folder in the `./benchmark` directory of this project.
+
+ ### Run the code
+
+ Run the following commands from the `./benchmark` directory of the project:
+
+ #### BERT
+
+ If you want to run a benchmark using data embedded with BERT's `bert-base-uncased` model, use this command:
+ ```bash
+ python [algorithm_name]_benchmark.py bert
+ ```
+
+ #### OpenAI
+
+ If you want to run a benchmark using data embedded with OpenAI's `text-embedding-3-large` model, use this command:
+ ```bash
+ python [algorithm_name]_benchmark.py gpt
+ ```
+
+ ## Citation
+
+ If you find this work useful, please cite our paper:
+
+ ```bibtex
+ @article{li2025nlp,
+   title={{NLP-ADBench}: {NLP} Anomaly Detection Benchmark},
+   author={Li, Yuangang and Li, Jiaqi and Xiao, Zhuo and Yang, Tiankai and Nian, Yi and Hu, Xiyang and Zhao, Yue},
+   journal={Findings of the Association for Computational Linguistics: EMNLP 2025},
+   year={2025}
+ }
+ ```