---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- it
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: squad-it
pretty_name: SQuAD-it
language_bcp47:
- it-IT
dataset_info:
  features:
  - name: id
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 50864680
    num_examples: 54159
  - name: test
    num_bytes: 7858312
    num_examples: 7609
  download_size: 13797580
  dataset_size: 58722992
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Dataset Card for "squad_it"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/crux82/squad-it](https://github.com/crux82/squad-it)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8.78 MB
- **Size of the generated dataset:** 58.79 MB
- **Total amount of disk used:** 67.57 MB

### Dataset Summary

SQuAD-it is obtained through semi-automatic translation of the SQuAD dataset into Italian. It is a large-scale dataset for open-domain question answering on factoid questions in Italian, containing more than 60,000 question/answer pairs derived from the original English dataset. The dataset is split into training and test sets to support reproducible benchmarking of QA systems.
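
As a quick orientation, the snippet below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library (assuming `datasets` is installed; the split names match the table later in this card):

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
from datasets import load_dataset

# "squad_it" is the dataset id on the Hub; this returns a DatasetDict
# with "train" and "test" splits.
squad_it = load_dataset("squad_it")

print(squad_it["train"][0]["question"])  # one Italian question from the training split
```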

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 8.78 MB
- **Size of the generated dataset:** 58.79 MB
- **Total amount of disk used:** 67.57 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "answers": "{\"answer_start\": [243, 243, 243, 243, 243], \"text\": [\"evitare di essere presi di mira dal boicottaggio\", \"evitare di essere pres...",
    "context": "\"La crisi ha avuto un forte impatto sulle relazioni internazionali e ha creato una frattura all' interno della NATO. Alcune nazi...",
    "id": "5725b5a689a1e219009abd28",
    "question": "Perchè le nazioni europee e il Giappone si sono separati dagli Stati Uniti durante la crisi?"
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
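
To make the `answers` layout concrete, here is an illustrative sketch (assuming the dataset has been loaded into `squad_it` as in the summary above) that pairs each answer text with its character offset in the context:

```python
# Illustrative only: inspect the `answers` field of one training example.
example = squad_it["train"][0]

for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
    # `answer_start` is the character offset of the answer span within `context`.
    span = example["context"][start:start + len(text)]
    print(f"answer: {text!r} | offset: {start} | context slice: {span!r}")
```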

### Data Splits

| name    | train | test |
| ------- | ----: | ---: |
| default | 54159 | 7609 |
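
As a rough sanity check against the table above, the per-split example counts can be read from the loaded `DatasetDict` (again assuming the loading sketch from the summary):

```python
# Print the number of examples per split; per this card, the expected
# counts are train=54159 and test=7609.
for split_name, split in squad_it.items():
    print(split_name, split.num_rows)
```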

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{10.1007/978-3-030-03840-3_29,
  author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto",
  editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo",
  title="Neural Learning for Question Answering in Italian",
  booktitle="AI*IA 2018 -- Advances in Artificial Intelligence",
  year="2018",
  publisher="Springer International Publishing",
  address="Cham",
  pages="389--402",
  isbn="978-3-030-03840-3"
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.