### Dataset Summary

The Turku WebQA dataset is a Finnish question-answer dataset extracted from different CommonCrawl sources (Parsebank, mC4-Fi, CC-Fi).

The dataset contains 237,000 question-answer pairs (290,000 questions altogether, but not all of them have an answer). Questions without answers can be discarded by filtering out the rows where the answer is None (null).
The codebase as well as the raw data can be found on [GitHub](https://github.com/TurkuNLP/register-qa).
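The filtering step described above can be sketched in plain Python. The sample rows below are invented for illustration; they only mirror the dataset's three fields (`source`, `question`, `answer`):

```python
# Minimal sketch of dropping unanswered questions, assuming records shaped
# like the dataset's rows. The example rows are invented for illustration.
rows = [
    {"source": "Parsebank", "question": "Mikä on Turku WebQA?",
     "answer": "Suomenkielinen kysymys-vastaus-aineisto."},
    {"source": "mC4-Fi", "question": "Jäikö tämä ilman vastausta?",
     "answer": None},
    {"source": "CC-Fi", "question": "Entä tämä?", "answer": "Ei jäänyt."},
]

# Keep only rows whose answer is present (i.e. not None/null).
answered = [row for row in rows if row["answer"] is not None]
print(len(answered))  # 2 of the 3 sample rows have an answer
```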

The extracted question-answer pairs cover various topics from the source corpora, some of which are explored in the paper; citing information can be found below.

### Data Fields

- `source`: a `string` feature. Indicates whether the question-answer pair was extracted from Parsebank, mC4-Fi or CC-Fi.
- `question`: a `string` feature.
- `answer`: a `string` feature. Can also be None (null).
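
As a small sketch of working with these fields, the pairs can be counted per source corpus. The records below are invented for illustration and only mirror the field names listed above:

```python
from collections import Counter

# Invented example records mirroring the dataset's fields.
pairs = [
    {"source": "Parsebank", "question": "K1?", "answer": "V1."},
    {"source": "Parsebank", "question": "K2?", "answer": None},
    {"source": "mC4-Fi", "question": "K3?", "answer": "V3."},
]

# Count question-answer pairs per source corpus.
per_source = Counter(pair["source"] for pair in pairs)
print(per_source)  # Counter({'Parsebank': 2, 'mC4-Fi': 1})
```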

### Manual Evaluation of the Pairs

To get an idea of how good the extracted pairs are, a sample was annotated for noisy artefacts, insufficient answers and missing context.
The evaluation showed that there is variation between the different source corpora.

| Source | Noisy artefacts | Insufficient answers | Missing context |
| --- | --- | --- | --- |
| Parsebank (N=22) | 0.23 | 0.14 | 0.07 |

### Citing

Citing information coming soon!