## License

license: other

This version of the dataset is strictly permitted for use exclusively in conjunction with the review process for the paper with Submission Number 13449. Upon completion of the review process, a de-anonymized version of the dataset will be released under a license similar to that of The Stack, which can be found at https://huggingface.co/datasets/bigcode/the-stack.
## Dataset Format
The dataset contains four different subdatasets, or configurations in HuggingFace Datasets terminology: `bm25_contexts`, `PP_contexts`, `randomNN_contexts`, and `sources`.
The first three contain the data used to train and test RepoFusion, and the last one contains the actual Java source code files the data was taken from.
The format of the data for the first three configurations is as follows:
```
features = datasets.Features({
    'id': datasets.Value('string'),
    'hole_file': datasets.Value('string'),
    'hole_line': datasets.Value('int32'),
    'hole_pos': datasets.Value('int32'),
    'question': datasets.Value('string'),
    'target': datasets.Value('string'),
    'answers': datasets.Sequence(
        datasets.Value('string')
    ),
    'ctxs': [{
        'title': datasets.Value('string'),
        'text': datasets.Value('string'),
        'score': datasets.Value('float64')
    }]
})
```
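For illustration, a single example following this schema can be represented as a plain Python dict. All values below are invented placeholders, not taken from the dataset, and the comments on the individual fields are assumptions based on the field names:

```python
# Hypothetical example mirroring the schema above; every value is a placeholder.
example = {
    'id': 'example_0',
    'hole_file': 'src/Main.java',          # file containing the hole
    'hole_line': 42,                       # line associated with the hole
    'hole_pos': 17,                        # position associated with the hole
    'question': 'code before the hole',
    'target': 'completion to predict',
    'answers': ['completion to predict'],
    'ctxs': [
        {'title': 'src/Util.java', 'text': 'retrieved snippet', 'score': 0.87},
        {'title': 'src/Helper.java', 'text': 'another snippet', 'score': 0.54},
    ],
}

# Fields are accessed like any mapping returned by `datasets`.
target = example['target']
num_contexts = len(example['ctxs'])
```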
The format of the `sources` configuration is either as follows, if accessed through `datasets.load_dataset`:
```
features = datasets.Features({
    'file': datasets.Value('string'),
    'content': datasets.Value('string')
})
```
Or, it can be accessed directly via the file system, where every Java file of a repository is stored under the path `<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java`.
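This directory layout can be traversed with the Python standard library. A minimal sketch, where all concrete path components (`Stack-Repo`, `some_user`, `some_repo`, the split) are placeholders for illustration:

```python
from pathlib import Path

# Root of the locally cloned dataset and one repository inside it
# (all names here are placeholders).
data_root = Path("Stack-Repo")
split, user, repo = "train", "some_user", "some_repo"
repo_dir = data_root / "data" / split / user / repo

# Every Java file of the repository lives somewhere under `repo_dir`;
# `rglob` walks the subtree recursively (and simply yields nothing
# if the directory does not exist).
java_files = sorted(repo_dir.rglob("*.java"))
```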
There are three splits for each configuration: `train`, `validation`, and `test`.
## Dataset usage
First, please clone the dataset locally:
```
git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local/path/to/manual/data>
```
Second, please load the desired configuration and split of the dataset:
```
ds = datasets.load_dataset(
    "RepoFusion/Stack-Repo",
    name="<configuration_name>",
    split="<split_name>",
    data_dir="<local/path/to/manual/data>"
)
```
NOTE: the `bm25_contexts`, `PP_contexts`, and `randomNN_contexts` configurations can be loaded directly from the hub without cloning the repo locally. For `sources`, if the repo has not been cloned beforehand or `data_dir` is not specified, a `ManualDownloadError` will be raised.
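Once a contexts configuration is loaded, each record carries its retrieved contexts with retrieval scores. Purely as an illustration of how the `ctxs` field might be flattened (this is not necessarily how RepoFusion constructs its actual model inputs), one could rank contexts by score and pair each with the surrounding code; `record` below is a hypothetical example following the schema shown earlier:

```python
# Hypothetical record following the dataset schema; values are placeholders.
record = {
    'question': 'code before the hole',
    'ctxs': [
        {'title': 'A.java', 'text': 'snippet a', 'score': 0.2},
        {'title': 'B.java', 'text': 'snippet b', 'score': 0.9},
    ],
}

# Rank the retrieved contexts by score, highest first, and build one
# string per context that combines it with the code before the hole.
ranked = sorted(record['ctxs'], key=lambda c: c['score'], reverse=True)
prompts = [f"{c['title']}\n{c['text']}\n{record['question']}" for c in ranked]
```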