Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: Catalan
Libraries: Datasets, pandas
parquet-converter committed
Commit 3ae62b1 · 1 Parent(s): a35fb66

Update parquet files

.gitattributes DELETED
@@ -1,38 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- *.json filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,159 +0,0 @@
- ---
-
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - ca
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- pretty_name: catalanqa
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
-
- ---
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- # Dataset Card for CatalanQA
-
- ## Dataset Description
- - **Homepage:** https://github.com/projecte-aina
- - **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
-
- ### Dataset Summary
-
- This dataset can be used to build extractive-QA systems and language models. It is an aggregation and balancing of 2 previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad).
-
- Splits have been balanced by kind of question and, unlike other datasets such as [SQuAD](http://arxiv.org/abs/1606.05250), each record contains only one question and one answer per context, although contexts can repeat multiple times.
-
- This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
-
- ### Supported Tasks and Leaderboards
- Extractive-QA, Language Model.
-
- ### Languages
- The dataset is in Catalan (`ca-CA`).
-
- ## Dataset Structure
- ### Data Instances
- ```
- {
-   "title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
-   "paragraphs": [
-     {
-       "context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
-       "qas": [
-         {
-           "question": "Quants policies enviaran a Catalunya?",
-           "id": "0.5961700408283691",
-           "answers": [
-             {
-               "text": "521",
-               "answer_start": 57
-             }
-           ]
-         }
-       ]
-     }
-   ]
- },
- ```
-
- ### Data Fields
- Follows the SQuAD v1 format [(Rajpurkar et al., 2016)](http://arxiv.org/abs/1606.05250):
-
- - `id` (str): Unique ID assigned to the question.
- - `title` (str): Title of the article.
- - `context` (str): Article text.
- - `question` (str): Question.
- - `answers` (list): Answers to the question, each containing:
-   - `text` (str): Text of the span that answers the question.
-   - `answer_start` (int): Starting character offset of the answer span.
-
- ### Data Splits
- - train.json: 17135 question/answer pairs
- - dev.json: 2157 question/answer pairs
- - test.json: 2135 question/answer pairs
-
- ## Dataset Creation
- ### Curation Rationale
-
- We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
-
- ### Source Data
- - [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org).
-
- #### Initial Data Collection and Normalization
- This dataset is a balanced aggregation of the [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
-
- #### Who are the source language producers?
- Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/).
-
- ### Annotations
- #### Annotation process
- We aggregated and balanced the [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
-
- To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar et al., 2016)](http://arxiv.org/abs/1606.05250).
-
- For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
-
- #### Who are the annotators?
- Annotation was commissioned by a specialized company that hired a team of native speakers.
-
- ### Personal and Sensitive Information
- No personal or sensitive information is included.
-
- ## Considerations for Using the Data
- ### Social Impact of Dataset
- We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
-
- ### Discussion of Biases
- [N/A]
-
- ### Other Known Limitations
- [N/A]
-
- ## Additional Information
- ### Dataset Curators
- Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
-
- This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
-
- ### Licensing Information
- This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
-
- ### Contributions
-
- [N/A]
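The `answer_start` field documented above follows the SQuAD character-offset convention; as a quick sanity check, this is a minimal sketch using the values from the Data Instances example in the deleted README:

```python
# SQuAD-style offset convention used by the `answers` field:
# `answer_start` is a 0-based character offset into `context` such that
# context[answer_start : answer_start + len(text)] == text.
# Values are taken from the Data Instances example above.
context = (
    "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies "
    "espanyols que han obtingut més mala nota a les oposicions."
)
answer = {"text": "521", "answer_start": 57}

start = answer["answer_start"]
span = context[start : start + len(answer["text"])]
assert span == answer["text"]  # the span at offset 57 is "521"
```

The same check can be applied to every record to validate offsets after any preprocessing that alters whitespace.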
 
catalanqa.py DELETED
@@ -1,110 +0,0 @@
- """CatalanQA Dataset."""
- # Loading script for the CatalanQA dataset.
- import json
-
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _CITATION = """\
- None
- """
-
- _DESCRIPTION = """\
- CatalanQA: an extractive QA dataset from original Catalan sources: Wikipedia and VilaWeb newswire.
-
- It is an aggregation and balancing of 2 previous datasets: VilaQUAD and ViquiQUAD.
-
- This dataset can be used to build extractive-QA and Language Models.
-
- Splits have been balanced by kind of question, and unlike other datasets such as SQuAD, it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
-
- - test.json contains 2135 question/answer pairs
-
- - train.json contains 17135 question/answer pairs
-
- - dev.json contains 2157 question/answer pairs
-
- Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
- and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
- """
-
- _HOMEPAGE = ""
-
- _URL = "https://huggingface.co/datasets/projecte-aina/catalanqa/resolve/main/"
- _TRAINING_FILE = "train.json"
- _DEV_FILE = "dev.json"
- _TEST_FILE = "test.json"
-
-
- class CatalanQA(datasets.GeneratorBasedBuilder):
-     """CatalanQA Dataset."""
-
-     VERSION = datasets.Version("1.0.1")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "title": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "answers": [
-                         {
-                             "text": datasets.Value("string"),
-                             "answer_start": datasets.Value("int32"),
-                         }
-                     ],
-                 }
-             ),
-             # No default supervised_keys (as we have to pass both question
-             # and context as input).
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {
-             "train": f"{_URL}{_TRAINING_FILE}",
-             "dev": f"{_URL}{_DEV_FILE}",
-             "test": f"{_URL}{_TEST_FILE}",
-         }
-         downloaded_files = dl_manager.download(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         logger.info("generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             catalanqa = json.load(f)
-             for article in catalanqa["data"]:
-                 title = article.get("title", "").strip()
-                 for paragraph in article["paragraphs"]:
-                     context = paragraph["context"].strip()
-                     for qa in paragraph["qas"]:
-                         question = qa["question"].strip()
-                         id_ = qa["id"]
-                         text = qa["answers"][0]["text"]
-                         answer_start = qa["answers"][0]["answer_start"]
-
-                         # Features currently used are "context", "question", and "answers".
-                         # Others are extracted here for the ease of future expansions.
-                         yield id_, {
-                             "title": title,
-                             "context": context,
-                             "question": question,
-                             "id": id_,
-                             "answers": [{"text": text, "answer_start": answer_start}],
-                         }
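The nested-to-flat conversion that the deleted script's `_generate_examples` performed can be sketched without the `datasets` machinery. The field names and nesting (`data` → `paragraphs` → `qas`) follow the script above; the sample input values below are hypothetical:

```python
# Flatten SQuAD-style nested JSON into one record per question, mirroring
# what _generate_examples in the deleted loading script yielded.
# The sample input is made up for the demo.
raw = {
    "data": [
        {
            "title": "Títol",
            "paragraphs": [
                {
                    "context": "Un context breu.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "Què diu?",
                            "answers": [{"text": "Un", "answer_start": 0}],
                        }
                    ],
                }
            ],
        }
    ]
}

records = [
    {
        "id": qa["id"],
        "title": article.get("title", "").strip(),
        "context": paragraph["context"].strip(),
        "question": qa["question"].strip(),
        "answers": [qa["answers"][0]],  # the script kept only the first answer
    }
    for article in raw["data"]
    for paragraph in article["paragraphs"]
    for qa in paragraph["qas"]
]

assert len(records) == 1 and records[0]["id"] == "q1"
```

This flat layout — one row per question/answer pair — is exactly the shape the parquet conversion in this commit materializes.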
 
test.json → default/catalanqa-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:733e7247aacda187da67cc1506311aaa8bc3c30401a13ae48df61b1e27af1e57
- size 2899131
+ oid sha256:b8e09c2aaa63530842955b9498947aa47e83ac0b5d81399bdb6f1e3a2d346e96
+ size 1364558
train.json → default/catalanqa-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:baecf1ca47cc55f8d17c6e1316016b20f0ed6768cad8e3d7a28d4ac0d7f52d87
- size 23437913
+ oid sha256:6de12413fa3edd478352d0bfc9aac15dcd3f1998dce63f0d7cc6f54f1296a504
+ size 10989505
dev.json → default/catalanqa-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c7c6801fb921e889b0a0b2aa1181659598f2ed074d4fcd4684ba61e557e3420f
- size 2951660
+ oid sha256:213b2d35bf91124d462d7a178adbcef904aeed611b77c569235151777e43a343
+ size 1403329