# EvoCodeBench
EvoCodeBench is an evolving code generation benchmark aligned with real-world code repositories.
Details can be found in our paper "EvoCodeBench: An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories".
**Paper:** https://arxiv.org/pdf/2404.00599.pdf
**Data and Source Code:** https://github.com/seketeam/EvoCodeBench
## Features
Compared to existing benchmarks (e.g., HumanEval), EvoCodeBench has the following features:
- Aligned with real-world repositories in multiple dimensions, e.g., code distributions and dependency distributions.
- Comprehensive annotations (e.g., requirements, code, dependencies, and repos) and robust metrics, Pass@k and Recall@k (see the Pass@k sketch after this list).
- An evolving design that avoids data leakage. The first version, EvoCodeBench-2403, has been released; it contains 275 samples from 25 real-world repositories created between October 2023 and February 2024.
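For reference, Pass@k is commonly computed with the unbiased estimator of Chen et al. (2021). The Python sketch below implements that standard estimator; EvoCodeBench's own harness (and its Recall@k metric, defined in the paper) may differ in detail, so treat this as illustrative rather than as the official implementation.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Chen et al., 2021).

    n: completions sampled per problem
    c: completions among them that pass the tests
    k: the k in Pass@k
    """
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing completion
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 completions per problem, 3 pass -> the Pass@1 estimate is 0.3
print(pass_at_k(n=10, c=3, k=1))
```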
A sample from EvoCodeBench is shown below.
![example](https://github.com/seketeam/EvoCodeBench/blob/main/images/example.png)
## Repository-level Code Generation
Based on EvoCodeBench, we propose the repository-level code generation task and evaluate 10 popular LLMs (e.g., GPT-4, GPT-3.5, DeepSeek Coder, StarCoder 2, Code Llama, Gemma, and Qwen 1.5). All prompts and model completions have been released for further community analysis.
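As a rough illustration of how one might drive a model over the benchmark, the sketch below iterates over a JSONL file of samples and builds a prompt per sample. The file path and the `requirement` and `signature` field names are hypothetical assumptions for illustration only; the actual data schema and the official prompts are in this repository.

```python
import json

def load_samples(path: str):
    """Yield benchmark samples from a JSONL file (one JSON object per line)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def build_prompt(sample: dict) -> str:
    # Hypothetical prompt template; field names are illustrative, not the real schema.
    return (
        "Complete the following function to satisfy its requirement.\n"
        f"Requirement: {sample['requirement']}\n"
        f"Signature: {sample['signature']}\n"
    )

for sample in load_samples("data.jsonl"):  # path is an assumption
    prompt = build_prompt(sample)
    # completion = my_llm.generate(prompt)  # plug in any LLM client here (hypothetical call)
```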
The evaluation results are shown below.
![LeaderBoard](https://github.com/seketeam/EvoCodeBench/blob/main/images/LeaderBoard.png)