Improve dataset card for TDDBench
#1 · opened by nielsr (HF Staff)

README.md CHANGED
@@ -16,4 +16,89 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+task_categories:
+- other
+tags:
+- benchmark
+- privacy
+- security
+- membership-inference
+- training-data-detection
+- tdd
+- mia
+- image
+- tabular
+- text
+license: apache-2.0
 ---
+
+# TDDBench: A Benchmark for Training Data Detection
+
+This repository contains the datasets used in the paper [TDDBench: A Benchmark for Training Data Detection](https://huggingface.co/papers/2411.03363).
+
+Training Data Detection (TDD) is the task of determining whether a specific data instance was used to train a machine learning model. In the computer security literature, TDD is also referred to as a Membership Inference Attack (MIA). Given its potential to assess the risk of training data leakage, support copyright authentication, and verify model unlearning, TDD has attracted significant attention in recent years, leading to the development of numerous methods.
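+
+As a concrete illustration, many TDD methods follow a simple metric-based recipe: score each record with a statistic of the model's output (for example, its loss) and predict "training member" when the score crosses a threshold. The sketch below illustrates that generic idea only; it is not TDDBench's implementation, and `model`, `x`, `y`, and `threshold` are hypothetical placeholders.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def loss_based_tdd(model, x, y, threshold):
+    """Loss-threshold membership test: low loss suggests a training example.
+
+    `model`, `x`, `y`, and `threshold` are hypothetical placeholders.
+    """
+    with torch.no_grad():
+        logits = model(x)                              # (batch, num_classes)
+        loss = F.cross_entropy(logits, y, reduction="none")
+    return (loss < threshold).long()                   # 1 = predicted member, 0 = non-member
+```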
+
+TDDBench is introduced as the first comprehensive benchmark for thoroughly evaluating the effectiveness of TDD methods. It consists of 13 datasets spanning three data modalities: image, tabular, and text. It benchmarks 21 different TDD methods across four detection paradigms and evaluates their performance from five perspectives.
+
+**Code:** [https://github.com/Zhihao-Zhu/TDDBench](https://github.com/Zhihao-Zhu/TDDBench)
+
+## Why Do You Need TDDBench?
+
+TDDBench is, to the best of our knowledge, **the most comprehensive training data detection benchmark**, evaluating **21** of the latest TDD algorithms on **41 target models** spanning three data modalities: **tabular, image, and text**.
+
+**Key Takeaways in 1 Minute**:
+
+1. There is a significant performance gap between different types of TDD algorithms, with model-based TDD methods generally outperforming the others. However, this advantage comes at the cost of training multiple reference models, which incurs high computational overhead.
+2. Memorization of training data is crucial for TDD performance: larger target models tend to memorize more, which raises the TDD success rate.
+3. TDD performance depends heavily on knowing the underlying target model architecture; TDD algorithms degrade noticeably when the target model is unknown.
+4. There is no clear winner among all methods, and none of the TDD algorithms is fully satisfactory, so practitioners must balance performance and efficiency based on real-world conditions.
+
+## Usage
+
+The datasets and target models used by TDDBench are hosted on [Hugging Face](https://huggingface.co/TDDBench) to facilitate quick evaluation of training data detection algorithms. This currently includes 12 datasets and 60 target models, with more data and target models planned for upload in the future.
+
+### Load dataset
+
+```python
+# Load dataset
+from datasets import load_dataset
+
+dataset_name = "student"
+dataset_path = f"TDDBench/{dataset_name}"
+dataset = load_dataset(dataset_path)["train"]
+```
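+
+The loaded object is a standard `datasets.Dataset`, so you can inspect it directly. A quick sanity check (row counts, column names, and types depend on the specific TDDBench dataset):
+
+```python
+print(dataset)           # number of rows and column names
+print(dataset.features)  # per-column types
+print(dataset[0])        # first record as a dict
+```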
+
+### Load target model
+
+```python
+from transformers import AutoConfig, AutoModel
+from hfmodel import MLPConfig, MLPHFModel  # provided in the TDDBench GitHub repo
+import numpy as np
+
+# Register MLPConfig and MLPHFModel so AutoModel can load our custom architecture.
+# (Register the WRN classes from hfmodel analogously for image models.)
+AutoConfig.register("mlp", MLPConfig)
+AutoModel.register(MLPConfig, MLPHFModel)
+
+# Load target model. To reduce statistical error, five target models are trained
+# for each model architecture and training dataset; model_idx selects one of them.
+dataset_name = "student"  # training dataset name
+model_name = "mlp"        # target model architecture
+model_idx = 0             # which of the five runs to load
+model_path = f"TDDBench/{model_name}-{dataset_name}-{model_idx}"
+model = AutoModel.from_pretrained(model_path)
+
+# Load the training data detection labels:
+# 1 = the record was in this model's training set, 0 = it was not.
+config = AutoConfig.from_pretrained(model_path)
+tdd_label = np.array(config.tdd_label)
+```
+
+The [demo.ipynb](https://github.com/Zhihao-Zhu/TDDBench/blob/main/demo.ipynb) notebook on GitHub provides a simple example of how to download a target model and dataset from Hugging Face, and how to record the model's output loss on both training and non-training data.
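+
+In the same spirit, the sketch below evaluates a simple loss-based detector against the ground-truth `tdd_label`. It assumes the `dataset`, `model`, and `tdd_label` objects from the snippets above; the feature and label column names (`x`, `y` here) and the model's forward signature are assumptions, so check demo.ipynb for the exact interface.
+
+```python
+import torch
+import torch.nn.functional as F
+from sklearn.metrics import roc_auc_score
+
+# Hypothetical column names; the real TDDBench schema may differ.
+x = torch.tensor(dataset["x"], dtype=torch.float32)
+y = torch.tensor(dataset["y"], dtype=torch.long)
+
+model.eval()
+with torch.no_grad():
+    logits = model(x)  # assumes the forward pass returns raw logits
+    loss = F.cross_entropy(logits, y, reduction="none").numpy()
+
+# Training members should have lower loss, so use -loss as the membership score.
+print("Loss-based detection AUC:", roc_auc_score(tdd_label, -loss))
+```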
+
+## Citation
+
+```bibtex
+@article{zhu2024tddbench,
+  title={TDDBench: A Benchmark for Training data detection},
+  author={Zhu, Zhihao and Yang, Yi and Lian, Defu},
+  journal={arXiv preprint arXiv:2411.03363},
+  year={2024}
+}
+```