Update README.md

README.md CHANGED
@@ -47,8 +47,20 @@ dataset_info:
 
 <!-- Provide a quick summary of the dataset. -->
 
-
-
+This is a refined benchmark for evaluating code localization methods.
+Compared to the original version, V2 improves data quality by filtering out examples that do not involve any function-level code modifications.
+Each entry in the dataset corresponds to a real-world code change, providing rich contextual information for studying bug localization, feature location, and automated code understanding tasks.
+We recommend using Loc-Bench_V2 for a more accurate and reliable evaluation of code localization performance.
+
+The table below shows the distribution of categories in the dataset.
+
+| category | count |
+|:---------|:---------|
+| Bug Report | 275 |
+| Feature Request | 216 |
+| Performance Issue | 140 |
+| Security Vulnerability | 29 |
+
 
 Code: https://github.com/gersteinlab/LocAgent
 
@@ -57,7 +69,8 @@ You can easily load LOC-BENCH using Hugging Face's datasets library:
 ```
 from datasets import load_dataset
 
-dataset = load_dataset("czlll/Loc-
+dataset = load_dataset("czlll/Loc-Bench_V2", split="test")
+
 ```
 ## 📄 Citation
 If you use LOC-BENCH in your research, please cite our paper: