Ki-Seki loubnabnl HF Staff committed on
Commit 907871c
0 Parent(s):

Duplicate from loubnabnl/humaneval_infilling


Co-authored-by: Loubna Ben Allal <loubnabnl@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,51 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
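The `.gitattributes` rules above route matching files through Git LFS. A minimal sketch of checking whether a filename would be picked up, using `fnmatch` as an approximation (real gitattributes matching has extra path semantics, e.g. for `saved_model/**/*`, which this ignores):

```python
from fnmatch import fnmatch

# A subset of the LFS patterns declared in the .gitattributes above.
LFS_PATTERNS = ["*.parquet", "*.bin", "*.zip", "*tfevents*"]

def tracked_by_lfs(name: str) -> bool:
    """Return True if the filename matches any of the LFS glob patterns."""
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("model.bin"))                   # matches *.bin
print(tracked_by_lfs("events.out.tfevents.12345"))   # matches *tfevents*
print(tracked_by_lfs("README.md"))                   # not tracked
```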
README.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - code
+ license:
+ - mit
+ multilinguality:
+ - monolingual
+ source_datasets:
+ - original
+ task_categories:
+ - text2text-generation
+ task_ids: []
+ pretty_name: OpenAI HumanEval-Infilling
+ tags:
+ - code-generation
+ ---
+
+ # HumanEval-Infilling
+
+ ## Dataset Description
+
+ - **Repository:** https://github.com/openai/human-eval-infilling
+ - **Paper:** https://arxiv.org/pdf/2207.14255
+
+ ## Dataset Summary
+
+ [HumanEval-Infilling](https://github.com/openai/human-eval-infilling) is a benchmark for code-infilling tasks, derived from the [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark for evaluating code generation models.
+
+ ## Dataset Structure
+
+ To load the dataset, specify a subset; if none is given, `HumanEval-SingleLineInfilling` is loaded by default.
+
+ ```python
+ from datasets import load_dataset
+ ds = load_dataset("humaneval_infilling", "HumanEval-RandomSpanInfilling")
+
+ DatasetDict({
+     test: Dataset({
+         features: ['task_id', 'entry_point', 'prompt', 'suffix', 'canonical_solution', 'test'],
+         num_rows: 1640
+     })
+ })
+ ```
+
+ ## Subsets
+
+ This dataset has four subsets: HumanEval-MultiLineInfilling, HumanEval-SingleLineInfilling, HumanEval-RandomSpanInfilling, and HumanEval-RandomSpanInfillingLight, with 5815, 1033, 1640, and 164 tasks, respectively.
+
+ ## Citation
+
+ ```
+ @article{bavarian2022efficient,
+   title={Efficient Training of Language Models to Fill in the Middle},
+   author={Bavarian, Mohammad and Jun, Heewoo and Tezak, Nikolas and Schulman, John and McLeavey, Christine and Tworek, Jerry and Chen, Mark},
+   journal={arXiv preprint arXiv:2207.14255},
+   year={2022}
+ }
+ ```
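Each record splits a HumanEval program into a `prompt` (the prefix), a masked middle span, and a `suffix`; `canonical_solution` holds the reference middle. A minimal sketch of how the pieces reassemble into a runnable program, using a made-up record (the values below are illustrative, not taken from the dataset):

```python
# Hypothetical record with the same schema as the dataset's features.
task = {
    "task_id": "Hypothetical/0",
    "entry_point": "add",
    "prompt": "def add(a, b):\n    ",
    "canonical_solution": "return a + b",
    "suffix": "\n",
}

def reassemble(task: dict, infill: str) -> str:
    # A model's job is to produce the middle span; gluing
    # prompt + infill + suffix yields the complete program.
    return task["prompt"] + infill + task["suffix"]

program = reassemble(task, task["canonical_solution"])
namespace = {}
exec(program, namespace)
print(namespace["add"](2, 3))  # → 5
```

The same reassembly is what the benchmark's test harness executes against each task's unit tests.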
data/HumanEval-MultiLineInfilling.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b49f312f0a781420a4ae54f5e73176009b9cfc52731e3a0b9be726b451032d1
+ size 10487245
data/HumanEval-RandomSpanInfilling.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f15be2d4a479b504072c64e79702df184010f13bdf75f8b67ce08e054c48e26
+ size 2203584
data/HumanEval-RandomSpanInfillingLight.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cbb2b36f90311a3a7d844b6361852bb33e3ab11ab3bca2b7e85bfb9bf78c5891
+ size 221823
data/HumanEval-SingleLineInfilling.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fffc71ec2f1674372fcc177511f92312f1a27a9eacd8e43255c9f5ee9eca8c8
+ size 1647941
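The four `.jsonl` files above are stored as Git LFS pointer files: three `key value` lines giving the spec version, the SHA-256 of the real content, and its size in bytes. A small sketch of parsing one such pointer (using the SingleLineInfilling pointer above):

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; split on the first space only,
    # since the version value itself contains no spaces but URLs might.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6fffc71ec2f1674372fcc177511f92312f1a27a9eacd8e43255c9f5ee9eca8c8
size 1647941
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 1647941
```

When the repo is cloned with LFS enabled, these pointers are transparently replaced by the actual JSONL data.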
humaneval_infilling.py ADDED
@@ -0,0 +1,80 @@
+ import json
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @article{bavarian2022efficient,
+   title={Efficient Training of Language Models to Fill in the Middle},
+   author={Bavarian, Mohammad and Jun, Heewoo and Tezak, Nikolas and Schulman, John and McLeavey, Christine and Tworek, Jerry and Chen, Mark},
+   journal={arXiv preprint arXiv:2207.14255},
+   year={2022}
+ }
+ """
+
+ _DESCRIPTION = """\
+ An evaluation benchmark for infilling tasks on the HumanEval dataset for code generation.
+ """
+
+ _SUBSETS = ["MultiLineInfilling", "SingleLineInfilling", "RandomSpanInfilling", "RandomSpanInfillingLight"]
+
+
+ class HumanevalConfig(datasets.BuilderConfig):
+     """BuilderConfig for a HumanEval-Infilling subset."""
+
+     def __init__(self, subset, **kwargs):
+         self.subset = subset
+         kwargs["name"] = f"HumanEval-{subset}"
+         super().__init__(**kwargs)
+
+
+ class HumanEvalInfilling(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIG_CLASS = HumanevalConfig
+
+     BUILDER_CONFIGS = [
+         HumanevalConfig(subset=subset, version=datasets.Version("1.0.0"))
+         for subset in _SUBSETS
+     ]
+
+     DEFAULT_CONFIG_NAME = "HumanEval-SingleLineInfilling"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             license="MIT",
+             features=datasets.Features(
+                 {
+                     "task_id": datasets.Value(dtype="string"),
+                     "entry_point": datasets.Value(dtype="string"),
+                     "prompt": datasets.Value(dtype="string"),
+                     "suffix": datasets.Value(dtype="string"),
+                     "canonical_solution": datasets.Value(dtype="string"),
+                     "test": datasets.Value(dtype="string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/openai/human-eval-infilling",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager):
+         filepath = dl_manager.download(f"data/{self.config.name}.jsonl")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": filepath},
+             )
+         ]
+
+     def _generate_examples(self, filepath):
+         # Each line of the JSONL file is one task, keyed by its line index.
+         with open(filepath, encoding="utf-8") as f:
+             for idx, line in enumerate(f):
+                 yield idx, json.loads(line)
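The loader's `_generate_examples` streams one JSON object per line and keys each example by its zero-based line index. A self-contained sketch of that logic against a throwaway file (the records here are hypothetical, not dataset content):

```python
import json
import os
import tempfile

# Two made-up records mimicking the dataset's line-per-record layout.
rows = [
    {"task_id": "Hypothetical/0", "prompt": "p0", "suffix": "s0"},
    {"task_id": "Hypothetical/1", "prompt": "p1", "suffix": "s1"},
]

fd, path = tempfile.mkstemp(suffix=".jsonl")
with os.fdopen(fd, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

def generate_examples(filepath):
    # Mirrors the loader's _generate_examples: one JSON object per line,
    # yielded as (index, record) pairs.
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            yield idx, json.loads(line)

examples = dict(generate_examples(path))
os.remove(path)
print(examples[1]["task_id"])  # → Hypothetical/1
```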