Commit 1643292 · Parent: 962b349 · Update README.md

README.md CHANGED
```

## How to evaluate your models

To evaluate the code generation capabilities of your models on HumanEval_ru, follow these steps (the example uses [CodeLlama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):

1. Clone and set up [our fork of Code Generation LM Evaluation Harness](https://github.com/NLP-Core-Team/bigcode-evaluation-harness).
2. Run the evaluation (WARNING: the generated code is executed and may be unsafe) with the following command:
```console
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
accelerate launch main.py \
  --model codellama/CodeLlama-7b-Python-hf \
  --max_length_generation 512 \
  ...
  --save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
  --metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
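The `.metrics` file reports pass@k scores. These are conventionally computed with the unbiased estimator introduced alongside HumanEval; the sketch below is illustrative of that formula (not the harness's exact code), assuming `n` samples are generated per task, of which `c` pass the task's unit tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n -- samples generated for a task
    c -- samples among them that pass the task's unit tests
    """
    if n - c < k:
        return 1.0  # too few failing samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

# A model's reported score is the mean of pass_at_k over all tasks.
print(pass_at_k(n=20, c=7, k=1))  # 0.35: 7 of 20 samples pass
```

For k=1 this reduces to the per-task pass rate c/n, averaged over the benchmark.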
3. The resulting metrics for CodeLlama-7b-Python should be:
```python
"humaneval_ru": {
    "pass@1": 0.35,