---
language:
  - en
license: mit
task_categories:
  - feature-extraction
  - text-classification
tags:
  - anomaly-detection
  - nlp
  - benchmark
---

# NLP-ADBench: NLP Anomaly Detection Benchmark

This repository contains NLP-ADBench, the most comprehensive NLP anomaly detection (NLP-AD) benchmark to date. It introduces 8 curated datasets, derived and transformed from existing NLP classification datasets, specifically tailored for NLP anomaly detection tasks and presented in a unified standard format to support and advance research in this domain.

The benchmark includes results from 19 algorithms applied to the 8 NLPAD datasets, categorized into two groups:

- **3 end-to-end algorithms** that directly process raw text data to produce anomaly detection outcomes.
- **16 embedding-based algorithms**, created by applying 8 traditional anomaly detection methods to text embeddings generated by two models: BERT's `bert-base-uncased` (BERT) and OpenAI's `text-embedding-3-large` (OpenAI).
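As a minimal sketch of the embedding-based group (not the benchmark's own code), a traditional detector such as scikit-learn's `IsolationForest` can be fit on pre-computed text embeddings; random vectors stand in here for the BERT/OpenAI embeddings:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for 768-dimensional text embeddings:
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 768))    # inlier documents
anomalous = rng.normal(4.0, 1.0, size=(20, 768))  # shifted outliers
X = np.vstack([normal, anomalous])
y = np.array([0] * 200 + [1] * 20)                # 1 = anomaly

# Fit a traditional anomaly detector on the embeddings and score them;
# higher scores indicate more anomalous inputs.
detector = IsolationForest(random_state=0).fit(X)
scores = -detector.score_samples(X)
print(f"AUROC: {roc_auc_score(y, scores):.3f}")
```

The same pattern applies to any of the 8 traditional detectors paired with either embedding model, which is how the 16 embedding-based configurations arise.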

**Paper:** NLP-ADBench: NLP Anomaly Detection Benchmark
**Code:** https://github.com/USC-FORTIS/NLP-ADBench

*Figure: Performance comparison of the 19 algorithms on the 8 NLPAD datasets using AUROC.*

## NLPAD Datasets

The datasets required for this project can be downloaded from the following Hugging Face links:

1. **NLPAD Datasets**: These are the datasets described in the NLP-ADBench paper. You can download them from:

2. **Pre-Extracted Embeddings**: For embedding-based algorithms, pre-extracted embeddings are provided. If you want to use them directly, you can download them from:
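Once a dataset split is downloaded, it can be read with plain Python. The JSON Lines layout and the field names below (`text`, `label`) are assumptions about the unified format for illustration, not guaranteed by this card:

```python
import json
from pathlib import Path

# Write a tiny stand-in file in the assumed unified format.
sample = [
    {"text": "a normal document", "label": 0},
    {"text": "an anomalous document", "label": 1},  # 1 = anomaly
]
path = Path("nlpad_sample.jsonl")
path.write_text("\n".join(json.dumps(r) for r in sample))

# Read it back, one JSON record per line.
records = [json.loads(line) for line in path.read_text().splitlines()]
texts = [r["text"] for r in records]
labels = [r["label"] for r in records]
print(len(texts), sum(labels))  # → 2 1
```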

## Sample Usage

To run the benchmark, first set up the environment and import the pre-extracted embeddings:

### Environment Setup Instructions

1. **Install Anaconda or Miniconda**: Download and install Anaconda or Miniconda from here.

2. **Create the Environment**: Using the terminal, navigate to the directory containing the `environment.yml` file in the GitHub repository and run:

   ```shell
   conda env create -f environment.yml
   ```

3. **Activate the Environment**: Activate the newly created environment using:

   ```shell
   conda activate nlpad
   ```

### Import data

Download the pre-extracted embeddings from the Hugging Face link above.

Place all downloaded embedding files into the `feature` folder in the `./benchmark` directory of this project.

### Run the code

Run the following commands from the ./benchmark directory of the project:

#### BERT

If you want to run a benchmark using data embedded with BERT's bert-base-uncased model, use this command:

```shell
python [algorithm_name]_benchmark.py bert
```

#### OpenAI

If you want to run a benchmark using data embedded with OpenAI's text-embedding-3-large model, use this command:

```shell
python [algorithm_name]_benchmark.py gpt
```
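To sweep several algorithm/embedding combinations, the two commands above can be composed programmatically. The algorithm names below are illustrative placeholders, not the repository's exact script names:

```python
# Compose benchmark commands for each (algorithm, embedding) pair.
algorithms = ["iforest", "lof", "ocsvm"]  # hypothetical script prefixes
embedding_flags = ["bert", "gpt"]         # the flags documented above

commands = [
    f"python {algo}_benchmark.py {flag}"
    for algo in algorithms
    for flag in embedding_flags
]
print(commands[0])  # → python iforest_benchmark.py bert
```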

## Citation

If you find this work useful, please cite our paper:

```bibtex
@article{li2025nlp,
  title={{NLP-ADBench}: {NLP} Anomaly Detection Benchmark},
  author={Li, Yuangang and Li, Jiaqi and Xiao, Zhuo and Yang, Tiankai and Nian, Yi and Hu, Xiyang and Zhao, Yue},
  journal={Findings of the Association for Computational Linguistics: EMNLP 2025},
  year={2025}
}
```