Modalities: Text · Formats: parquet · Size: < 1K · Tags: code · Libraries: Datasets, pandas
deniskokosss committed · Commit dfe2dea · 1 Parent(s): 1643292

Added benchmark results

Files changed (1):
  1. README.md +31 -6
README.md CHANGED
@@ -46,11 +46,12 @@ DatasetDict({
  ```
  ## How to evaluate your models
  To evaluate code generation capabilities of your models on HumanEval_ru please follow these steps (example is for [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):
- 1. Clone and setup [Our fork of Code Generation LM Evaluation Harness](https://github.com/NLP-Core-Team/bigcode-evaluation-harness)
- 2. Run evaluation (WARNING: generated code is executed, it may be unsafe) with the following command
+ 1. Clone and setup [Code Generation LM Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness)
+ 2. Copy our files lm_eval/tasks/humaneval_ru.py and lm_eval/tasks/__init__.py to lm_eval/tasks of the cloned repo
+ 3. Run evaluation (WARNING: generated code is executed, it may be unsafe) with the following command
  ```console
- mkdir -p ./outs/humaneval_ru
- mkdir -p ./results/humaneval_ru
+ # mkdir -p ./outs/humaneval_ru
+ # mkdir -p ./results/humaneval_ru
  accelerate launch main.py \
  --model codellama/CodeLlama-7b-Python-hf \
  --max_length_generation 512 \
@@ -64,10 +65,34 @@ accelerate launch main.py \
  --save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
  --metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
  ```
- 3. Resulting metrics of Codellama-7b-Python should be
+ 4. Resulting metrics of Codellama-7b-Python should be
  ```python
  "humaneval_ru": {
  "pass@1": 0.35,
  "pass@10": 0.5122803695209872
  },
- ```
+ ```
+ # Benchmark
+ [Starcoder](https://huggingface.co/bigcode/starcoder) and [Codellama](https://huggingface.co/codellama/CodeLlama-7b-hf) models evaluations on HumanEval_Ru and HumanEval are presented in the table below. For further information on Pass@1 and Pass@10 please refer to [original paper](https://arxiv.org/abs/2107.03374).
+
+ | model                   | RU Pass@1 | RU Pass@10 | EN Pass@1 | EN Pass@10 |
+ |:------------------------|----------:|-----------:|----------:|-----------:|
+ | starcoderbase-1b        |    0.1420 |     0.1801 |    0.1509 |     0.2045 |
+ | starcoderbase-3b        |    0.1924 |     0.2606 |    0.2137 |     0.3289 |
+ | starcoderbase-7b        |    0.2515 |     0.3359 |    0.2868 |     0.3852 |
+ | starcoderbase-15b       |    0.2676 |     0.3872 |    0.3036 |     0.4611 |
+ | starcoder-15b-Python    |    0.3103 |     0.4132 |    0.3353 |     0.4931 |
+ | CodeLlama-7b-hf         |    0.2673 |     0.3688 |    0.2975 |     0.4351 |
+ | CodeLlama-7b-Python-hf  |    0.3500 |     0.5122 |    0.3960 |     0.5761 |
+ | CodeLlama-13b-hf        |    0.3380 |     0.4884 |    0.3557 |     0.5489 |
+ | CodeLlama-13b-Python-hf |    0.4380 |     0.5796 |    0.4301 |     0.6226 |
+
+ <details>
+ <summary> Generation parameters to reproduce results in the table </summary>
+ 'do_sample': True,
+ 'temperature': 0.2,
+ 'top_k': 0,
+ 'top_p': 0.95,
+ 'n_samples': 20,
+ 'max_length_generation': 512,
+ </details>
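
For convenience, per-model numbers like the metrics snippet in the diff can be gathered from the harness output files. A minimal sketch, assuming the files written to `--metric_output_path` are plain JSON with a `"humaneval_ru"` entry as shown above (directory layout and file naming are illustrative, matching the example command):

```python
import json
from pathlib import Path

# Illustrative layout: one <model>.metrics JSON file per run, as in the command above
results_dir = Path("./results/humaneval_ru")

print(f"{'model':<28} {'pass@1':>8} {'pass@10':>8}")
for path in sorted(results_dir.glob("*.metrics")):
    with path.open() as f:
        metrics = json.load(f)  # assumes the harness writes plain JSON
    scores = metrics.get("humaneval_ru", {})
    print(f"{path.stem:<28} {scores.get('pass@1', 0.0):>8.4f} {scores.get('pass@10', 0.0):>8.4f}")
```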
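
The Pass@1 and Pass@10 columns in the benchmark table follow the unbiased pass@k estimator from the linked paper (Chen et al., 2021). A short sketch of that estimator, here with 20 samples per task to match `n_samples` above:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021): 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: a task with 7 passing completions out of 20 generated samples
print(pass_at_k(n=20, c=7, k=1))   # pass@1 estimate for this task
print(pass_at_k(n=20, c=7, k=10))  # pass@10 estimate for this task
```

The reported score is the mean of this per-task estimate over all problems in the benchmark.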
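
The generation parameters in the `<details>` block are standard sampling settings. As a rough illustration of what they correspond to outside the harness, here is a hedged sketch using `transformers` directly (the model id and prompt are placeholders; in the benchmark the harness sets these through its own CLI flags and takes prompts from the dataset):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Python-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; real prompts come from HumanEval_ru
prompt = "def add(a: int, b: int) -> int:\n    "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings mirroring the table's generation parameters
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_k=0,                  # 0 disables top-k filtering
    top_p=0.95,
    num_return_sequences=20,  # n_samples
    max_length=512,           # max_length_generation
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```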