---
language:
- en
license: mit
task_categories:
- text-classification
tags:
- anomaly-detection
- nlp
---

# NLP-ADBench: NLP Anomaly Detection Benchmark

**Paper:** [NLP-ADBench: NLP Anomaly Detection Benchmark](https://huggingface.co/papers/2412.04784)
**Code:** [https://github.com/USC-FORTIS/NLP-ADBench](https://github.com/USC-FORTIS/NLP-ADBench)

## Overview

**NLP-ADBench** is a comprehensive benchmark for anomaly detection in natural language processing (NLP). Alongside the benchmark itself, it introduces the NLPAD datasets: 8 datasets curated and transformed from existing NLP classification datasets, tailored specifically for NLP anomaly detection and released in a unified standard format to support and advance research in this domain.

To ensure a robust evaluation, NLP-ADBench includes results from 19 algorithms applied to the 8 NLPAD datasets, categorized into two groups:
- 3 end-to-end algorithms that directly process raw text data to produce anomaly detection outcomes. 
- 16 embedding-based algorithms, created by applying 8 traditional anomaly detection methods to text embeddings generated using two models:
  - BERT's `bert-base-uncased` (**BERT**)
  - OpenAI's `text-embedding-3-large` (**OpenAI**)
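As an illustration of the embedding-based group, the sketch below applies one traditional detector (scikit-learn's Isolation Forest) to an embedding matrix. This is hypothetical scaffolding, not the benchmark's actual pipeline: the random vectors merely stand in for real `bert-base-uncased` sentence embeddings, which are 768-dimensional.

```python
# Hypothetical sketch: a classical anomaly detector on text embeddings.
# Random vectors stand in for a real (n_samples, 768) BERT embedding matrix.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 768))  # placeholder for bert-base-uncased output

detector = IsolationForest(random_state=0).fit(embeddings)
# Negate score_samples so that higher values mean more anomalous.
scores = -detector.score_samples(embeddings)
print(scores.shape)  # (100,)
```

In the real benchmark, the detector would be fit on the training split's embeddings and scored on the test split, with AUROC computed against the anomaly labels.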


![Performance comparison of 19 Algorithms on 8 NLPAD datasets using AUROC](https://github.com/USC-FORTIS/NLP-ADBench/raw/main/figs/benchmark.png)

## NLPAD Datasets

The datasets required for this project can be downloaded from the following Hugging Face links:

1.  **NLPAD Datasets**: These are the datasets introduced in the NLP-ADBench paper. You can download them from:

    - [NLP-AD Datasets](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/datasets)

2.  **Pre-Extracted Embeddings**: For the embedding-based algorithms, we have already extracted the text embeddings. To use them directly, download them from:

    - [Pre-Extracted Embeddings](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/embeddings)

## Sample Usage

### Environment Setup Instructions

Follow these steps to set up the development environment using the provided Conda environment file:

1.  **Install Anaconda or Miniconda**: 
    Download and install Anaconda or Miniconda from [here](https://docs.conda.io/en/latest/miniconda.html).

2.  **Create the Environment**: 
    Using the terminal, navigate to the directory containing the `environment.yml` file (found in the [GitHub repository](https://github.com/USC-FORTIS/NLP-ADBench)) and run:
    ```bash
    conda env create -f environment.yml
    ```
3.  **Activate the Environment**: 
    Activate the newly created environment using:
    ```bash
    conda activate nlpad
    ```

### Import data

Download the pre-extracted embeddings from the [Hugging Face link](https://huggingface.co/datasets/kendx/NLP-ADBench/tree/main/embeddings).

Place all downloaded embedding files into the `feature` folder in the `./benchmark` directory of this project.
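Once the files are in place, loading an embedding matrix is a single call. The exact file names and formats under `./benchmark/feature` are defined by the repository; the sketch below assumes NumPy `.npy` files purely for illustration, and writes a dummy array first so the example is self-contained.

```python
# Hypothetical load step; real file names/formats come from the NLP-ADBench repo.
import tempfile
from pathlib import Path

import numpy as np

feature_dir = Path(tempfile.mkdtemp()) / "benchmark" / "feature"
feature_dir.mkdir(parents=True)

# Dummy stand-in for a downloaded embedding file.
np.save(feature_dir / "example_bert.npy", np.zeros((10, 768)))

embeddings = np.load(feature_dir / "example_bert.npy")
print(embeddings.shape)  # (10, 768)
```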

### Run the code
Run the following commands from the `./benchmark` directory of the project (after cloning the [GitHub repository](https://github.com/USC-FORTIS/NLP-ADBench)):

#### BERT
If you want to run a benchmark using data embedded with BERT's `bert-base-uncased` model, use this command:
```bash
python [algorithm_name]_benchmark.py bert
```

#### OpenAI
If you want to run a benchmark using data embedded with OpenAI's `text-embedding-3-large` model, use this command:
```bash
python [algorithm_name]_benchmark.py gpt
```
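To run every algorithm against both embedding sources in one pass, a simple loop over the `*_benchmark.py` naming pattern shown above can work. The dummy script created below only stands in for the real benchmark scripts so the loop is demonstrable self-contained; in the actual repo the scripts already exist under `./benchmark`.

```shell
# Hypothetical batch run over all benchmark scripts, for both embeddings.
mkdir -p benchmark
printf 'import sys; print("ran", sys.argv[1])\n' > benchmark/demo_benchmark.py
cd benchmark
for script in *_benchmark.py; do
    python "$script" bert
    python "$script" gpt
done
```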

## Citation

If you find this work useful, please cite our paper:

```bibtex
@article{li2025nlp,
  title={{NLP-ADBench}: {NLP} Anomaly Detection Benchmark},
  author={Li, Yuangang and Li, Jiaqi and Xiao, Zhuo and Yang, Tiankai and Nian, Yi and Hu, Xiyang and Zhao, Yue},
  journal={Findings of the Association for Computational Linguistics: EMNLP 2025},
  year={2025}
}
```