tags:
- code
size_categories:
- 1K<n<10K
---

This dataset consists of the attack samples used for the paper *Extracting training data from fine-tuned Large Language Models for code*.

We have two attack splits:
- The `fine-tuning attack`, which consists of selected samples from the **[fine-tuning set](https://huggingface.co/datasets/fabiosalern/MEM-TUNE_Java)**
- The `pre-training attack`, which consists of selected samples from the Java subset of **[TheStack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2)**

We also provide splits by the duplication rate of the samples in the training set:
- `d1`: the sample appears exactly once in the training set
- `d2`: the sample appears twice in the training set
- `d3`: the sample appears three times in the training set
- `dg3`: the sample appears more than three times in the training set
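As an illustrative sketch (not the paper's actual pipeline), the `d1`/`d2`/`d3`/`dg3` buckets above can be derived from a training corpus by counting how often each sample occurs; the `duplication_buckets` helper and the toy corpus below are hypothetical, for illustration only:

```python
from collections import Counter


def duplication_buckets(training_samples):
    """Group samples into d1/d2/d3/dg3 buckets by occurrence count."""
    counts = Counter(training_samples)
    buckets = {"d1": [], "d2": [], "d3": [], "dg3": []}
    for sample, n in counts.items():
        if n == 1:
            buckets["d1"].append(sample)
        elif n == 2:
            buckets["d2"].append(sample)
        elif n == 3:
            buckets["d3"].append(sample)
        else:  # more than three occurrences
            buckets["dg3"].append(sample)
    return buckets


# Toy corpus: short strings standing in for code samples.
corpus = ["a", "b", "b", "c", "c", "c", "d", "d", "d", "d"]
print(duplication_buckets(corpus))
# → {'d1': ['a'], 'd2': ['b'], 'd3': ['c'], 'dg3': ['d']}
```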