Update README.md
README.md
CHANGED
@@ -8,10 +8,12 @@ size_categories:
 ---
 
 <h1 style="text-align: center;">Abstract</h1>
-
-
-
-
+<p>
+Driven by the surge in code generation using large language models (LLMs), numerous benchmarks have emerged to evaluate these LLMs' capabilities. We conducted a large-scale human evaluation of HumanEval and MBPP, two popular benchmarks for Python code generation, analyzing their diversity and difficulty.
+Our findings unveil a critical bias towards a limited set of programming concepts, neglecting most other concepts entirely. Furthermore, we uncover a worrying prevalence of easy tasks that can inflate model performance estimates. To address these limitations, we propose a novel benchmark, PythonSaga, featuring 185 hand-crafted prompts with a balanced representation of 38 programming concepts across diverse difficulty levels.
+The robustness of our benchmark is demonstrated by the poor performance of existing Code-LLMs. The code and dataset are openly available to the NLP community at
+<a href="https://github.com/PythonSaga/PythonSaga" target="_blank">https://github.com/PythonSaga/PythonSaga</a>.
+</p>
 <br>
 
 <h1 style="text-align: center;">PythonSaga</h1>
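Since this card accompanies a dataset release, a minimal loading sketch may help readers. It assumes the benchmark is published on the Hugging Face Hub under the hypothetical identifier `PythonSaga/PythonSaga`, with a `test` split and `prompt`/`concept` columns; none of these names are confirmed by the card above, so the linked GitHub repository remains the authoritative source.

```python
# Hypothetical usage sketch: the Hub repo id, split name, and column names below
# are assumptions, not taken from the card; see https://github.com/PythonSaga/PythonSaga.
from collections import Counter

from datasets import load_dataset  # pip install datasets

# Assumed Hub identifier and split for the benchmark release.
dataset = load_dataset("PythonSaga/PythonSaga", split="test")

print(f"Loaded {len(dataset)} prompts")   # the card describes 185 hand-crafted prompts
print(Counter(dataset["concept"]))        # assumed column tagging the 38 programming concepts
print(dataset[0]["prompt"][:200])         # assumed column holding the task prompt text
```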