# Fusion: Enhancing Code Understanding through Structured Evaluation

📑 Paper    |    🌐 Project Page    |    💾 Released Resources    |    📦 Repo

We release the raw data for our processed CodeEval dataset, derived from the original dataset maintained by the BigCode team.

The data format for each line in `code_samples_filtered_v3.jsonl` is as follows:

```
{
    "code_snippet": ,
    "language": ,
    "complexity_score": ,
    "test_cases": [
        {
            "input": ,
            "expected_output":
        },
        ...
    ],
    "source_repo": ,
    "annotations":
}
```

Some entries may have empty `test_cases` due to execution timeout or resource constraints during automated test generation.

*Note: The complexity scoring algorithm is documented in our technical appendix. Future releases will include improved annotation coverage.*

**Citation**

If you use this dataset, please cite:

```bibtex
@article{Kocetkov2022TheStack,
  title={The Stack: 3 TB of permissively licensed source code},
  author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou, Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm},
  journal={Preprint},
  year={2022}
}
```
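**Usage**

A minimal sketch of how one might load the JSONL file and keep only entries with non-empty `test_cases`. The `load_samples` helper and the synthetic record below are illustrative, not part of the released tooling; the example writes a one-record `sample.jsonl` so it runs standalone.

```python
import json
from pathlib import Path

def load_samples(path):
    """Yield one parsed record per line of a JSONL file, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Synthetic record matching the documented schema (for demonstration only;
# in practice, point load_samples at code_samples_filtered_v3.jsonl).
record = {
    "code_snippet": "def add(a, b):\n    return a + b",
    "language": "python",
    "complexity_score": 1.0,
    "test_cases": [{"input": "add(1, 2)", "expected_output": "3"}],
    "source_repo": "example/repo",
    "annotations": {},
}
Path("sample.jsonl").write_text(json.dumps(record) + "\n", encoding="utf-8")

# Filter out entries whose automated test generation produced no test cases.
usable = [r for r in load_samples("sample.jsonl") if r["test_cases"]]
print(len(usable))  # 1
```

Filtering on `test_cases` up front avoids surprises later, since the release notes that some entries have empty test-case lists.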