This dataset supports research aligned with the methodology described in our paper.
## Key Features

- ✅ **LLVM IR representation** of each function (field: `llvm_ir_function`)
- ✅ Includes **train**, **validation**, and **test** splits (see below)
- ✅ Vulnerability labels (`label`) for supervised learning
- ✅ Metadata about original source (`dataset`, `file`, `fun_name`)
- ✅ Structured as **Parquet** files for fast loading and processing
|
Each record contains:

- `label`: Binary label indicating vulnerability (`1` for vulnerable, `0` for non-vulnerable)
- `split`: Dataset split (`train`, `validation`, `test`)

## Split Information

This dataset is split into **train**, **validation**, and **test** sets, following the exact partitioning strategy used in the experiments described in our paper. The splits are disjoint, with no overlapping functions, which ensures a fair evaluation of generalization performance and lets researchers directly reproduce our results or compare against them under consistent conditions.

- `train`: Used to fit model parameters
- `validation`: Used for model selection and hyperparameter tuning
- `test`: Used exclusively for final evaluation and benchmarking

## Format

This dataset is provided in [Apache Parquet](https://parquet.apache.org/) format under the `default` configuration. It follows the [Croissant schema](https://mlcommons.org/croissant/) and includes three predefined splits.
```python
from datasets import load_dataset

# Load a specific split
train_ds = load_dataset("compAgent/CompRealVul_LLVM", split="train")
print(train_ds[0])
```
## Example

```json