---
license: mit
task_categories:
- text-generation
---

# MultiFileTest (ProjectTest)

MultiFileTest (also referred to as ProjectTest) is a multi-file-level benchmark for unit test generation covering Python, Java, and JavaScript. It comprises 20 moderate-sized, high-quality projects per language, and is designed to evaluate Large Language Models (LLMs) on more practical and challenging multi-file codebases than standard single-file benchmarks.

- **Paper:** [MultiFileTest: A Multi-File-Level LLM Unit Test Generation Benchmark and Impact of Error Fixing Mechanisms](https://huggingface.co/papers/2502.06556)
- **GitHub Repository:** [https://github.com/YiboWANG214/ProjectTest](https://github.com/YiboWANG214/ProjectTest)

## Dataset Statistics

| Language   | Avg. #Files | Avg. LOC | Avg. #Stars | Avg. #Forks |
|------------|------------|----------|-------------|-------------|
| Python     | 6.10       | 654.60   | 5810.30     | 996.90      |
| Java       | 4.65       | 282.60   | 3306.05     | 1347.65     |
| JavaScript | 4.00       | 558.05   | 17242.30    | 5476.45     |

## Sample Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("yibowang214/ProjectTest")
```

## Citation

If you find this work useful, please consider citing:

```bibtex
@article{wang2025projecttest,
  title={ProjectTest: A Project-level Unit Test Generation Benchmark and Impact of Error Fixing Mechanisms},
  author={Wang, Yibo and Xia, Congying and Zhao, Wenting and Du, Jiangshu and Miao, Chunyu and Deng, Zhongfen and Yu, Philip S and Xing, Chen},
  journal={arXiv preprint arXiv:2502.06556},
  year={2025}
}
```