Tasks: Question Answering
Modalities: Text
Sub-tasks: extractive-qa
Languages: code
Size: 100K - 1M
License:
Update README
README.md CHANGED
@@ -25,7 +25,7 @@ task_ids:
- extractive-qa
---

-# Dataset Card for

## Table of Contents
- [Table of Contents](#table-of-contents)
@@ -54,43 +54,60 @@ task_ids:

## Dataset Description

-- **Homepage:**
- **Repository:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- **Paper:**

### Dataset Summary

CodeQueries allows to explore extractive question-answering methodology over code
-by providing semantic queries as question and answer
-complex concepts and long chains of reasoning.

### Supported Tasks and Leaderboards

### Languages

-The

## Dataset Structure

-###

-### Data Fields

- query_name (query name to uniquely identify the query)
-- context_blocks (code blocks supplied as input to the model for prediction)
-- answer_spans (code in answer spans)
-- supporting_fact_spans (code in supporting-fact spans)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- label_sequence (example subtoken labels)

### Data Splits
@@ -104,62 +121,115 @@ All splits of all settings have same format. An example looks as follows -

## Dataset Creation

-[More Information Needed]

-###

[More Information Needed]

-[

-###

-###

-[More Information Needed]

-##

-[More Information Needed]

### Licensing Information

### Citation Information

[More Information Needed]
-### Contributions

-Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
- extractive-qa
---

# Dataset Card for Codequeries

## Table of Contents
- [Table of Contents](#table-of-contents)
## Dataset Description

- **Homepage:** [Codequeries](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- **Leaderboard:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- **Paper:**

### Dataset Summary

CodeQueries allows one to explore extractive question-answering methodology over code by providing semantic natural language queries as questions and code spans as answers or supporting facts. Given a query, finding the answer and supporting-fact spans in a code context involves analysis of complex concepts and long chains of reasoning. The dataset is provided with five separate settings; details on the settings can be found in the [paper]().

### Supported Tasks and Leaderboards

Query comprehension for code and extractive question answering for code. Refer to the [paper]().

### Languages

The dataset contains code context from `python` files.

## Dataset Structure

### How to use

The dataset can be used directly with the Hugging Face `datasets` library. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:

```python
from datasets import load_dataset

ds = load_dataset("thepurpleowl/codequeries", "<ideal/file_ideal/prefix/twostep>", split="train")
print(next(iter(ds)))
# OUTPUT: a dict with the fields described under "Data Splits and Data Fields" below
```
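If you are not sure which settings are available before loading, one option is to list the dataset's configuration names first. A minimal sketch, assuming the `ideal` setting exposes a `train` split as in the snippet above:

```python
from datasets import get_dataset_config_names, load_dataset

# List the available settings (configurations) of the dataset,
# expected to include ideal, file_ideal, prefix and twostep.
print(get_dataset_config_names("thepurpleowl/codequeries"))

# Load one setting and inspect its size and columns.
ds = load_dataset("thepurpleowl/codequeries", "ideal", split="train")
print(ds.num_rows, ds.column_names)
```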
### Data Splits and Data Fields

Detailed information on the data splits for the proposed settings can be found in the paper.

In general, data splits in all proposed settings have examples with the following fields -

```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (1 (positive) or 0 (negative) example type)
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block)
```
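To show how these fields fit together, here is a small sketch that prints the answer spans of a few positive examples. The `ideal` setting and `train` split are assumptions, and the span keys (`span`, `start_line`, `end_line`) follow the example record shown later in this card:

```python
from datasets import load_dataset

# Assumes the `ideal` setting and a `train` split; adjust as needed.
ds = load_dataset("thepurpleowl/codequeries", "ideal", split="train")

shown = 0
for example in ds:
    if example["example_type"] != 1:  # keep only positive examples
        continue
    print(example["query_name"], "->", example["code_file_path"])
    for span in example["answer_spans"]:
        # Each span carries the answer text plus its location metadata.
        print("  answer:", span["span"], f"(lines {span['start_line']}-{span['end_line']})")
    shown += 1
    if shown == 3:
        break
```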
### Data Splits
## Dataset Creation

The dataset is created by using the [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. To get the natural language queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files, CodeQL was used.

### Licensing Information

The Codequeries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) License.

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.

# Dataset Card for Codequeries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [How to use](#how-to-use)
  - [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Codequeries](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- **Paper:**

### Dataset Summary

CodeQueries allows one to explore extractive question-answering methodology over code by providing semantic natural language queries as questions and code spans as answers or supporting facts. Given a query, finding the answer and supporting-fact spans in a code context involves analysis of complex concepts and long chains of reasoning. The dataset is provided with five separate settings; details on the settings can be found in the [paper]().

### Supported Tasks and Leaderboards

Query comprehension for code and extractive question answering for code.

### Languages

The dataset contains code context from `python` files.

## Dataset Structure

### How to use

The dataset can be used directly with the Hugging Face `datasets` library. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:

```python
import datasets

# instead of `twostep`, the other settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))

# OUTPUT:
{'query_name': 'Unused import',
 'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
 'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
  'metadata': 'root',
  'header': "['module', '___EOS___']",
  'index': 0},
 'answer_spans': [{'span': 'from glance.common import context',
   'start_line': 19,
   'start_column': 0,
   'end_line': 19,
   'end_column': 33}
  ],
 'supporting_fact_spans': [],
 'example_type': 1,
 'single_hop': False,
 'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
 'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
 'relevance_label': 1
}
```
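In the `twostep` setting each example is a single candidate block labelled with `relevance_label`, so a natural first step is to keep only the blocks that are relevant to their query before predicting spans. A minimal sketch, using the same setting and split as above:

```python
import datasets

# Load the twostep test split, as in the snippet above.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# Keep only blocks marked relevant to their query (relevance_label == 1).
relevant_blocks = ds.filter(lambda example: example["relevance_label"] == 1)
print(len(ds), "candidate blocks ->", len(relevant_blocks), "relevant blocks")
```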
### Data Splits and Data Fields

Detailed information on the data splits for the proposed settings can be found in the paper.

In general, data splits in all proposed settings have examples with the following fields -

```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (1 (positive) or 0 (negative) example type)
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field]
```
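For a quick sanity check of a split's composition, `example_type` can be used to count positive versus negative examples. A small sketch, again assuming the `twostep` test split from the loading example above:

```python
import datasets
from collections import Counter

# Assumes the twostep test split, as in the loading example above.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# example_type is 1 for positive examples and 0 for negative examples.
counts = Counter(ds["example_type"])
print("positive:", counts[1], "negative:", counts[0])
```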
## Dataset Creation

The dataset is created by using the [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. To get the natural language queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files, CodeQL was used.

## Additional Information

### Licensing Information

The Codequeries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) License.

### Citation Information

[More Information Needed]