---
language:
- id
license: apache-2.0
task_categories:
- question-answering
pretty_name: TyDi QA Indonesian
size_categories:
- 10K<n<100K
source_datasets:
- google-research-datasets/tydiqa
configs:
- config_name: primary_task
  data_files:
  - split: train
    path: "train.parquet"
  - split: validation
    path: "valid.parquet"
---
# TyDi QA - Indonesian

This is a dataset card for the Indonesian subset of TyDi QA.
The data is copied unchanged from the original dataset: https://huggingface.co/datasets/google-research-datasets/tydiqa
## Dataset Description

### Dataset Summary

TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer but
do not yet know it (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
## Dataset Structure

### Data Instances

#### primary_task

An example from the 'validation' split looks as follows.
```
This example was too long and was cropped:

{
 'annotations': {'minimal_answers_end_byte': array([-1], dtype=int32),
  'minimal_answers_start_byte': array([-1], dtype=int32),
  'passage_answer_candidate_index': array([-1], dtype=int32),
  'yes_no_answer': array(['NONE'], dtype=object)},
 'document_plaintext': '\n transl.\n\n Ras (dari bahasa Prancis race, yang sendirinya dari ',
 'document_title': 'Ras manusia',
 'document_url': 'https://id.wikipedia.org/wiki/Ras%20manusia',
 'language': 'indonesian',
 'passage_answer_candidates': {'plaintext_end_byte': array([  659,   843,  1195,  1353,  2125,  3607,  4161,  4508,  4740,
         6530,  7665,  7999,  8561,  9209,  9615, 10690, 11126, 11979,
        12746, 13304, 13486, 15385, 17455, 17505], dtype=int32),
  'plaintext_start_byte': array([    1,   660,   844,  1196,  1354,  2126,  3608,  4198,  4509,
         4741,  6531,  7666,  8000,  8562,  9249,  9616, 10774, 11127,
        11980, 12747, 13325, 13503, 15400, 17459], dtype=int32)},
 'question_text': 'berapakah jenis ras yang ada didunia?'}
```
### Data Fields

The data fields are the same among all splits.

#### primary_task

- `passage_answer_candidates`: a dictionary feature containing:
  - `plaintext_start_byte`: an `int32` feature.
  - `plaintext_end_byte`: an `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
  - `passage_answer_candidate_index`: an `int32` feature.
  - `minimal_answers_start_byte`: an `int32` feature.
  - `minimal_answers_end_byte`: an `int32` feature.
  - `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
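Note that `plaintext_start_byte`/`plaintext_end_byte` (and the minimal-answer offsets) index into the UTF-8 byte encoding of `document_plaintext`, not into character positions. A minimal sketch of slicing out one passage candidate; the document text and offsets below are stand-ins, not real dataset values:

```python
def passage(document_plaintext: str, start_byte: int, end_byte: int) -> str:
    """Recover one passage candidate from its UTF-8 byte offsets."""
    raw = document_plaintext.encode("utf-8")
    return raw[start_byte:end_byte].decode("utf-8")

doc = "Ras manusia adalah sistem klasifikasi."  # stand-in document text
print(passage(doc, 0, 11))  # Ras manusia
```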
| ``` | |
| @article{tydiqa, | |
| title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, | |
| author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki} | |
| year = {2020}, | |
| journal = {Transactions of the Association for Computational Linguistics} | |
| } | |
| ``` | |