# Dataset
## Overview
The [**Python Function Benchmark**](https://huggingface.co/datasets/Sheerio/SynPrune-Python) serves as a real-world evaluation dataset for membership inference attacks on code LLMs, specifically targeting models pretrained on datasets like the Pile (e.g., Pythia, GPT-Neo, StableLM).
The dataset contains both training (member) and non-training (non-member) data:
- **Member data** includes 1,000 Python functions sampled from the Pile dataset (released in 2021). To ensure a diverse sample, we systematically selected **the first 10 functions** from every 100 consecutive entries in the Pile, resulting in a total of 1,000 member functions.
- **Non-member data** includes 1,000 Python functions extracted from 100 GitHub repositories created after January 1, 2024 (all four evaluated LLMs had been released prior to this date). To ensure repository quality, we sorted repositories by star count in descending order and extracted 10 Python functions from each repository in order.
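
The systematic sampling used for the member split (the first 10 functions from every 100 consecutive entries) can be sketched as follows. This is a minimal illustration, not the actual extraction pipeline: `sample_members` and the `pile_functions` placeholder are our own names, and loading the Pile itself is out of scope here.

```python
def sample_members(pile_functions, block=100, take=10, total=1000):
    """Collect the first `take` functions out of every `block` consecutive
    entries until `total` functions have been gathered."""
    members = []
    for i, fn in enumerate(pile_functions):
        if i % block < take:  # first 10 of every 100 consecutive entries
            members.append(fn)
        if len(members) == total:
            break
    return members

# With 10,000 candidate entries this yields exactly 1,000 members.
sampled = sample_members([f"fn_{i}" for i in range(10_000)])
```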
To verify that the non-member functions were genuinely original and not cloned from pre-existing sources, we implemented a rigorous verification process: we parsed each candidate function's code with Python's `ast` module to extract its name, variable names, and function calls, then used these elements to build search queries for the GitHub API. The verification employed three heuristics: (1) searching for the exact function name to identify direct duplicates; (2) searching by internal variable names to detect refactored code reuse; and (3) searching for the complete string of function calls to find logic similarities. Two authors peer-reviewed the search results to confirm that all 1,000 functions were original and created after January 2024.
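
The AST-based extraction step can be sketched roughly as below. This is a simplified sketch using Python's standard `ast` module; the GitHub API querying and the peer review are omitted, and the helper name `extract_query_elements` is our own.

```python
import ast

def extract_query_elements(source: str):
    """Parse one function's source and collect the elements used to build
    search queries: its name, assigned variable names, and called names."""
    tree = ast.parse(source)
    func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    variables = sorted({n.id for n in ast.walk(func)
                        if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)})
    calls = [n.func.id for n in ast.walk(func)
             if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]
    return func.name, variables, calls

name, variables, calls = extract_query_elements(
    "def scale(xs):\n    total = sum(xs)\n    return [x / total for x in xs]\n"
)
# name == "scale"; variables == ["total", "x"]; calls == ["sum"]
```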
The benchmark includes 214 non-member function files (some repositories contributed multiple files) with an average of 25.34 lines of code (LOC). For member functions, file counts are unavailable as this information was not provided in the Pile dataset.
The benchmark supports evaluation under varied member-to-non-member **ratios** (e.g., 1:1, 1:5, 5:1) and includes statistics on syntax conventions (e.g., **38.4%** of tokens are syntax-related across categories like data models and expressions).
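
Building an evaluation split under a given member-to-non-member ratio can be sketched as follows. This is illustrative only: `build_eval_split` is our own helper, and `members`/`non_members` stand in for the two 1,000-function pools.

```python
import random

def build_eval_split(members, non_members, ratio=(1, 1), size=600, seed=0):
    """Sample a shuffled, labeled evaluation set (1 = member, 0 = non-member)
    whose class counts follow the requested member:non-member ratio."""
    m_part, n_part = ratio
    n_members = size * m_part // (m_part + n_part)
    n_non = size - n_members
    rng = random.Random(seed)
    sample = ([(fn, 1) for fn in rng.sample(members, n_members)] +
              [(fn, 0) for fn in rng.sample(non_members, n_non)])
    rng.shuffle(sample)
    return sample

# A 1:5 split of size 600 contains 100 members and 500 non-members.
split = build_eval_split(list(range(1000)), list(range(1000)), ratio=(1, 5))
```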
If you find this work helpful, please consider citing our paper:
```bibtex
@misc{li2025synprune,
title={Uncovering Pretraining Code in LLMs: A Syntax-Aware Attribution Approach},
      author={Yuanheng Li and Zhuoyang Chen and Xiaoyun Liu and Yuhao Wang and Mingwei Liu and Yang Shi and Kaifeng Huang and Shengjie Zhao},
year={2025},
eprint={2511.07033},
archivePrefix={arXiv},
primaryClass={cs.CR}
}
```
## divide.py