---
dataset_info:
  features:
    - name: feature
      sequence: float32
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 619360
      num_examples: 4424
  download_size: 138237
  dataset_size: 619360
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - other
tags:
  - benchmark
  - privacy
  - security
  - membership-inference
  - training-data-detection
  - tdd
  - mia
  - image
  - tabular
  - text
license: apache-2.0
---

# TDDBench: A Benchmark for Training data detection

This repository contains the datasets used in the paper [TDDBench: A Benchmark for Training data detection](https://arxiv.org/abs/2411.03363).

Training Data Detection (TDD) is a task aimed at determining whether a specific data instance is used to train a machine learning model. In the computer security literature, TDD is also referred to as Membership Inference Attack (MIA). Given its potential to assess the risks of training data breaches, ensure copyright authentication, and verify model unlearning, TDD has garnered significant attention in recent years, leading to the development of numerous methods.
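As a concrete illustration of the task (not a method from the paper), the simplest TDD/MIA baseline thresholds the model's per-example loss: training members tend to have lower loss than unseen data. A minimal sketch, where the losses, labels, and threshold are all synthetic assumptions:

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: 1 (training member) if the loss is below the threshold."""
    return (np.asarray(losses) < threshold).astype(int)

# Synthetic example: members (label 1) tend to have lower loss on the target model.
losses = np.array([0.05, 0.10, 0.90, 1.20])  # per-example loss from the target model
labels = np.array([1, 1, 0, 0])              # 1 = training data, 0 = non-training data

preds = loss_threshold_attack(losses, threshold=0.5)
accuracy = (preds == labels).mean()
print(accuracy)  # 1.0 on this toy data
```

Real TDD methods differ mainly in how they compute and calibrate this membership score, which is what the benchmark's four detection paradigms capture.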

TDDBench is introduced as the first comprehensive benchmark to thoroughly evaluate the effectiveness of TDD methods. It consists of 13 datasets spanning three data modalities: image, tabular, and text. It benchmarks 21 different TDD methods across four detection paradigms and evaluates their performance from five perspectives.

Code: https://github.com/Zhihao-Zhu/TDDBench

## Why Do You Need TDDBench?

TDDBench is, to the best of our knowledge, the most comprehensive training data detection benchmark: it evaluates 21 of the latest TDD algorithms on 41 target models spanning three data modalities: tabular, image, and text.

### Key Takeaways in 1 Minute

1. There is a significant performance gap between different types of TDD algorithms, with model-based TDD methods generally outperforming the other types. However, this outperformance comes at the cost of building multiple reference models, which incurs high computational overhead.
2. Memorization of training data is crucial for a TDD algorithm's performance; larger target models usually exhibit stronger memorization, which raises the TDD success rate.
3. TDD performance highly depends on knowing the underlying target model architecture; TDD algorithms suffer performance degradation when the target model is unknown.
4. There is no obvious winner among all methods: none of the TDD algorithms is fully satisfactory, so testers must balance performance and efficiency based on real-world conditions.
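To make the first takeaway concrete: model-based methods calibrate the target model's loss on a candidate example against losses from reference models that did not train on it, which is why they need many reference models. A hedged numpy sketch of one such calibrated score (a z-score style statistic; all numbers below are synthetic assumptions, not the paper's implementation):

```python
import numpy as np

def calibrated_score(target_loss, reference_losses):
    """Lower score = more member-like. Compares the target model's loss to the
    distribution of losses from reference models trained without the example."""
    ref = np.asarray(reference_losses, dtype=float)
    return (target_loss - ref.mean()) / (ref.std() + 1e-8)

reference_losses = [0.9, 1.1, 1.0, 0.8, 1.2]  # hypothetical reference-model losses

# Candidate A: target loss far below the reference distribution -> likely member.
score_member = calibrated_score(0.05, reference_losses)
# Candidate B: target loss in line with the references -> likely non-member.
score_nonmember = calibrated_score(1.0, reference_losses)
print(score_member < score_nonmember)  # True
```

Each reference model here stands in for a full training run, which is where the high computational cost in takeaway 1 comes from.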

## Usage

The datasets and target models used by TDDBench are uploaded to Hugging Face to facilitate quick evaluation of training data detection algorithms. This currently includes 12 datasets and 60 target models, with more data and target models planned for the future.

### Load dataset

```python
# Load dataset
from datasets import load_dataset

dataset_name = "student"
dataset_path = f"TDDBench/{dataset_name}"
dataset = load_dataset(dataset_path)["train"]
```

### Load target model

```python
from transformers import AutoConfig, AutoModel
from hfmodel import MLPConfig, MLPHFModel, WRNConfig, WRNHFModel
import numpy as np

# Register MLPConfig and MLPHFModel so AutoModel can load our custom architecture.
AutoConfig.register("mlp", MLPConfig)
AutoModel.register(MLPConfig, MLPHFModel)

# Load target model
dataset_name = "student"  # Training dataset name
model_name = "mlp"        # Target model architecture
model_idx = 0             # To reduce statistical error, we train five target models
                          # for each model architecture and training dataset.
model_path = f"TDDBench/{model_name}-{dataset_name}-{model_idx}"
model = AutoModel.from_pretrained(model_path)

# Load the training data detection label:
# 1 = the model's training data, 0 = the model's non-training data.
config = AutoConfig.from_pretrained(model_path)
tdd_label = np.array(config.tdd_label)
```
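With the model and `tdd_label` in hand, the typical next step is to record a per-example loss and split it by membership. A self-contained numpy sketch of that step (the logits, labels, and `tdd_label` below are synthetic stand-ins, not outputs of the actual target model):

```python
import numpy as np

def per_example_cross_entropy(logits, labels):
    """Per-example cross-entropy loss from raw logits (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

# Synthetic stand-ins for model outputs and the TDD membership labels.
logits = np.array([[4.0, 0.0], [0.1, 0.2], [3.5, -1.0], [0.0, 0.0]])
labels = np.array([0, 1, 0, 1])
tdd_label = np.array([1, 0, 1, 0])  # 1 = training data, 0 = non-training data

loss = per_example_cross_entropy(logits, labels)
# Members typically show lower loss than non-members.
print(loss[tdd_label == 1].mean() < loss[tdd_label == 0].mean())  # True
```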

The `demo.ipynb` file on GitHub provides a simple example of how to download a target model and dataset from Hugging Face, and how to record the model's output loss on both training and non-training data.
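Once membership scores are recorded, detection quality is usually reported with threshold-free metrics such as AUROC over the scores. A self-contained sketch of the AUROC computation via the rank statistic (synthetic scores; this is an illustration, not the benchmark's evaluation code):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC as the probability that a random member scores higher
    than a random non-member (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Higher score = more member-like (e.g., negated loss).
scores = np.array([0.9, 0.8, 0.3, 0.1])
labels = np.array([1, 1, 0, 0])
print(auroc(scores, labels))  # 1.0: members are perfectly separated
```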

## Citation

```bibtex
@article{zhu2024tddbench,
      title={TDDBench: A Benchmark for Training data detection},
      author={Zhu, Zhihao and Yang, Yi and Lian, Defu},
      journal={arXiv preprint arXiv:2411.03363},
      year={2024}
}
```