---
license: mit
task_categories:
- text-generation
---
# MultiFileTest (ProjectTest)
MultiFileTest (also referred to as ProjectTest) is a multi-file-level benchmark for unit test generation covering Python, Java, and JavaScript. It features 20 moderate-sized, high-quality projects per language, designed to evaluate Large Language Models (LLMs) on more realistic and challenging multi-file codebases than standard single-file benchmarks.
- Paper: MultiFileTest: A Multi-File-Level LLM Unit Test Generation Benchmark and Impact of Error Fixing Mechanisms
- GitHub Repository: https://github.com/YiboWANG214/ProjectTest
## Dataset Statistics
| Language | Avg. #Files | Avg. LOC | Avg. #Stars | Avg. #Forks |
|---|---|---|---|---|
| Python | 6.10 | 654.60 | 5810.30 | 996.90 |
| Java | 4.65 | 282.60 | 3306.05 | 1347.65 |
| JavaScript | 4.00 | 558.05 | 17242.30 | 5476.45 |
## Sample Usage
You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Log in first (e.g. via `huggingface-cli login`) to access this dataset
ds = load_dataset("yibowang214/ProjectTest")
```
## Citation
If you find this work useful, please consider citing:
```bibtex
@article{wang2025projecttest,
  title={ProjectTest: A Project-level Unit Test Generation Benchmark and Impact of Error Fixing Mechanisms},
  author={Wang, Yibo and Xia, Congying and Zhao, Wenting and Du, Jiangshu and Miao, Chunyu and Deng, Zhongfen and Yu, Philip S and Xing, Chen},
  journal={arXiv preprint arXiv:2502.06556},
  year={2025}
}
```