Commit 54ad4a5 · committed by github-actions
Parent(s): 3d10a5a

Auto files update [main]
Files changed:
- README.md +116 -6
- app.py +6 -0
- codebleu.py +124 -0
- requirements.txt +2 -0
- tests.py +17 -0

README.md
CHANGED
@@ -1,12 +1,122 @@
 ---
-title:
+title: codebleu
+tags:
+- evaluate
+- metric
+- code
+- codebleu
+description: "Unofficial `CodeBLEU` implementation with Linux and macOS support, available via PyPI and the HF Hub."
 sdk: gradio
-sdk_version: 3.
+sdk_version: 3.19.1
 app_file: app.py
 pinned: false
 ---
# Metric Card for codebleu

## Metric Description

Unofficial `CodeBLEU` implementation with Linux and macOS support, available via PyPI and the HF Hub.

> An ideal evaluation metric should consider the grammatical correctness and the logic correctness.
> We propose weighted n-gram match and syntactic AST match to measure grammatical correctness, and introduce semantic data-flow match to calculate logic correctness.

(from the [CodeXGLUE](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans/evaluator/CodeBLEU) repo)

In a nutshell, `CodeBLEU` is a weighted combination of the `n-gram match (BLEU)`, `weighted n-gram match (BLEU-weighted)`, `AST match`, and `data-flow match` scores; a minimal sketch of this combination follows below.

The metric has shown higher correlation with human evaluation than the `BLEU` and `accuracy` metrics.
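
As a rough illustration, the final score is a plain weighted sum of the four component scores, each in `[0, 1]`. The helper below is hypothetical and not part of the package; the real logic lives inside `codebleu.calc_codebleu`:

```python
# Hypothetical illustration of the CodeBLEU combination formula.
def combine_codebleu(ngram, weighted_ngram, syntax, dataflow,
                     weights=(0.25, 0.25, 0.25, 0.25)):
    alpha, beta, gamma, delta = weights
    return alpha * ngram + beta * weighted_ngram + gamma * syntax + delta * dataflow

# With the component scores from the Examples section below:
print(combine_codebleu(0.1041, 0.1109, 1.0, 1.0))  # ~0.5537
```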

## How to Use

Either install the `codebleu` package from PyPI and call `calc_codebleu` directly, or load the metric through the `evaluate` library; the simplest possible calls are shown under Examples below.
### Inputs

- `references` (`list[str]` or `list[list[str]]`): reference code
- `predictions` (`list[str]`): predicted code
- `lang` (`str`): code language, see `codebleu.AVAILABLE_LANGS` for available languages (`python`, `c_sharp`, and `java` at the moment)
- `weights` (`tuple[float, float, float, float]`): weights of the `ngram_match`, `weighted_ngram_match`, `syntax_match`, and `dataflow_match` scores respectively; defaults to `(0.25, 0.25, 0.25, 0.25)`
- `tokenizer` (`callable`): splits a code string into tokens; defaults to `s.split()`

A call overriding the defaults is sketched right after this list.
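
The parameter values here are arbitrary examples, not recommendations:

```python
from codebleu import calc_codebleu

# Emphasize AST and data-flow matches over plain n-gram overlap,
# and pass an explicit tokenizer (None falls back to s.split()).
result = calc_codebleu(
    ["def sum ( first , second ) :\n return second + first"],
    ["def add ( a , b ) :\n return a + b"],
    lang="python",
    weights=(0.1, 0.1, 0.4, 0.4),
    tokenizer=lambda s: s.split(),
)
```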

### Output Values

The metric outputs a `dict[str, float]` with the following fields:

- `codebleu`: the final `CodeBLEU` score
- `ngram_match_score`: the `ngram_match` score (BLEU)
- `weighted_ngram_match_score`: the `weighted_ngram_match` score (BLEU-weighted)
- `syntax_match_score`: the `syntax_match` score (AST match)
- `dataflow_match_score`: the `dataflow_match` score

Each of the scores is in the range `[0, 1]`, where `1` is the best score.

### Examples

Using the pip package (`pip install codebleu`):

```python
from codebleu import calc_codebleu

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

result = calc_codebleu([reference], [prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None)
print(result)
# {
#   'codebleu': 0.5537,
#   'ngram_match_score': 0.1041,
#   'weighted_ngram_match_score': 0.1109,
#   'syntax_match_score': 1.0,
#   'dataflow_match_score': 1.0
# }
```
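
Since `references` may also be a `list[list[str]]`, several references can back a single prediction; a small sketch under that assumption:

```python
from codebleu import calc_codebleu

# Two candidate references for one prediction; the second reference
# matches the prediction exactly, so the score should be high.
prediction = "def add ( a , b ) :\n return a + b"
references = [[
    "def sum ( first , second ) :\n return second + first",
    "def add ( a , b ) :\n return a + b",
]]

result = calc_codebleu(references, [prediction], lang="python")
print(result["codebleu"])
```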

Or using the `evaluate` library (the `codebleu` package is still required):

```python
import evaluate
metric = evaluate.load("dvitel/codebleu")

prediction = "def add ( a , b ) :\n return a + b"
reference = "def sum ( first , second ) :\n return second + first"

result = metric.compute(references=[reference], predictions=[prediction], lang="python", weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None)
```

Note: `lang` is required, and `evaluate`'s `compute` takes `predictions` and `references` as keyword arguments.

## Limitations and Bias

As this library requires compiling a shared-object (`.so`) file, it is platform dependent.

Currently it is available for Linux (manylinux) and macOS on Python 3.8+.
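
A defensive import guard along these lines reflects that constraint (a sketch only; the package itself does not require it):

```python
import platform
import sys

# CodeBLEU wheels target Linux (manylinux) and macOS on Python 3.8+.
if sys.version_info >= (3, 8) and platform.system() in ("Linux", "Darwin"):
    from codebleu import calc_codebleu
else:
    raise RuntimeError("codebleu is unavailable on this platform")
```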

## Citation

```bibtex
@misc{ren2020codebleu,
    title={CodeBLEU: a Method for Automatic Evaluation of Code Synthesis},
    author={Shuo Ren and Daya Guo and Shuai Lu and Long Zhou and Shujie Liu and Duyu Tang and Neel Sundaresan and Ming Zhou and Ambrosio Blanco and Shuai Ma},
    year={2020},
    eprint={2009.10297},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}
```

## Further References

This implementation is based on the original [CodeXGLUE/CodeBLEU](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans/evaluator/CodeBLEU) code: refactored, built for macOS, tested, and with multiple workarounds fixed to make it more usable.

The source code is available in the GitHub [k4black/codebleu](https://github.com/k4black/codebleu) repository.

app.py
ADDED
@@ -0,0 +1,6 @@

import evaluate
from evaluate.utils import launch_gradio_widget


# Load this metric module from the HF Hub and serve the auto-generated
# Gradio demo widget for it.
module = evaluate.load("k4black/codebleu")
launch_gradio_widget(module)

codebleu.py
ADDED
@@ -0,0 +1,124 @@

# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unofficial CodeBLEU implementation wrapped as a Hugging Face `evaluate` module."""

import evaluate
import datasets
from codebleu import calc_codebleu


_CITATION = """\
@misc{ren2020codebleu,
    title={CodeBLEU: a Method for Automatic Evaluation of Code Synthesis},
    author={Shuo Ren and Daya Guo and Shuai Lu and Long Zhou and Shujie Liu and Duyu Tang and Neel Sundaresan and Ming Zhou and Ambrosio Blanco and Shuai Ma},
    year={2020},
    eprint={2009.10297},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}
"""

_DESCRIPTION = """\
Unofficial `CodeBLEU` implementation with Linux and macOS support, available via PyPI and the HF Hub.

Based on the original [CodeXGLUE/CodeBLEU](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans/evaluator/CodeBLEU) code: refactored, built for macOS, tested, and with multiple workarounds fixed to make it more usable.
"""


_KWARGS_DESCRIPTION = """
Calculate a weighted combination of `n-gram match (BLEU)`, `weighted n-gram match (BLEU-weighted)`, `AST match` and `data-flow match` scores.

Args:
    predictions: list of predictions to score. Each prediction
        should be a string with tokens separated by spaces.
    references: list of references, one for each prediction. Each
        reference should be a string with tokens separated by spaces.
    lang: programming language in ['java', 'js', 'c_sharp', 'php', 'go', 'python', 'ruby'].
    weights: tuple of 4 floats to use as weights for scores. Defaults to (0.25, 0.25, 0.25, 0.25).
Returns:
    codebleu: resulting `CodeBLEU` score,
    ngram_match_score: resulting `n-gram match (BLEU)` score,
    weighted_ngram_match_score: resulting `weighted n-gram match (BLEU-weighted)` score,
    syntax_match_score: resulting `AST match` score,
    dataflow_match_score: resulting `data-flow match` score,
Examples:
    >>> metric = evaluate.load("k4black/codebleu")
    >>> ref = "def sum ( first , second ) :\n return second + first"
    >>> pred = "def add ( a , b ) :\n return a + b"
    >>> results = metric.compute(references=[ref], predictions=[pred], lang="python")
    >>> print(results)
"""


@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
class codebleu(evaluate.Metric):
    """CodeBLEU metric from CodeXGLUE"""

    def _info(self):
        return evaluate.MetricInfo(
            module_type="metric",
            # This is the description that will appear on the modules page.
            description=_DESCRIPTION,
            citation=_CITATION,
            inputs_description=_KWARGS_DESCRIPTION,
            # This defines the format of each prediction and reference:
            # references may be one string or a list of strings per prediction.
            features=[
                datasets.Features(
                    {
                        "predictions": datasets.Value("string", id="sequence"),
                        "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
                        # "lang": datasets.Value("string"),
                        # "weights": datasets.Value("string"),
                        # "tokenizer": datasets.Value("string"),
                    }
                ),
                datasets.Features(
                    {
                        "predictions": datasets.Value("string", id="sequence"),
                        "references": datasets.Value("string", id="sequence"),
                        # "lang": datasets.Value("string"),
                        # "weights": datasets.Value("string"),
                        # "tokenizer": datasets.Value("string"),
                    }
                ),
            ],
            # Homepage of the module for documentation
            homepage="https://github.com/k4black/codebleu",
            # Additional links to the codebase or references
            codebase_urls=["https://github.com/k4black/codebleu"],
            reference_urls=[
                "https://github.com/k4black/codebleu",
                "https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans/evaluator",
                "https://arxiv.org/abs/2009.10297",
            ],
        )

    def _download_and_prepare(self, dl_manager):
        """Optional: download external resources useful to compute the scores"""
        pass

    def _compute(self, predictions, references, lang, weights=(0.25, 0.25, 0.25, 0.25), tokenizer=None):
        """Returns the scores"""
        return calc_codebleu(
            references=references,
            predictions=predictions,
            lang=lang,
            weights=weights,
            tokenizer=tokenizer,
        )
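
Because two `datasets.Features` schemas are declared above, the loaded module should accept references either as one string per prediction or as a list of strings per prediction; a brief sketch under that assumption:

```python
import evaluate

metric = evaluate.load("k4black/codebleu")

pred = "def add ( a , b ) :\n return a + b"

# Single reference per prediction (second Features schema).
metric.compute(predictions=[pred], references=[pred], lang="python")

# Several references per prediction (first Features schema).
metric.compute(predictions=[pred], references=[[pred, "def sum ( a , b ) :\n return a + b"]], lang="python")
```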

requirements.txt
ADDED
@@ -0,0 +1,2 @@

git+https://github.com/huggingface/evaluate@main
codebleu

tests.py
ADDED
@@ -0,0 +1,17 @@

test_cases = [
    {
        "predictions": [0, 0],
        "references": [1, 1],
        "result": {"metric_score": 0}
    },
    {
        "predictions": [1, 1],
        "references": [1, 1],
        "result": {"metric_score": 1}
    },
    {
        "predictions": [1, 0],
        "references": [1, 1],
        "result": {"metric_score": 0.5}
    }
]