fernando-peres committed on
Commit 772c39c · 1 Parent(s): de0ab64

builder created
.gitignore ADDED
@@ -0,0 +1,2 @@
+env
+/env
.vscode/settings.json ADDED
@@ -0,0 +1,3 @@
+{
+  "cSpell.words": ["Multiclassification"]
+}
README.md CHANGED
@@ -1,3 +1,49 @@
 ---
 license: apache-2.0
+
+dataset_info:
+- config_name: raw
+  features:
+  - name: source_id
+    dtype: int64
+  - name: doc_source_id
+    dtype: int64
+  - name: document
+    dtype: string
+  - name: text
+    dtype: string
 ---
+
+# Paraguay Legislation
+
+The Paraguay Legislation dataset is a comprehensive collection of legal documents sourced from the legislative framework of Paraguay, including resolutions, decrees, laws, and other kinds of legislative texts.
+
+This dataset has been curated as a resource for Natural Language Processing (NLP) research focused on text classification tasks. The classification process is divided into two objectives:
+
+1. Binary classification: 0 - no cost, 1 - cost (the legislation imposes costs on society).
+
+2. Multi-classification: classify the document into several hierarchical categories of costs.
+
+For more information about the multi-classification definitions, please check this link: <todo: link to>.
+
+## Subsets
+
+The dataset contains various subsets, each representing a different data quality and preparation stage. Within these subsets, you will encounter multiple versions of the same data, with variations primarily reflecting differences in data quality, metadata columns, and the preprocessing applied to the data.
+
+The subsets are the following:
+
+**1. Raw:** Data extracted from the source files (URLs, PDFs, and Word files) without any transformation or sentence splitting. It is useful because you can access the raw text extracted from the seeds (PDFs and Word files) and apply further preprocessing from this point without having to re-extract text from the source files.
+
+**2. Sentences:** Normalized data split by sentence, mainly fixing issues in text extracted from PDFs. This stage also adds metadata about each sentence, for example, whether it is a title.
+
+**3. Sentences Unlabeled:** Unlabeled corpora of Paraguay legislation, prepared to be labeled by the experts. Each instance of the dataset represents a specific text passage, split according to the original formatting of the raw text (from the original documents).
+
+**4. Sentences Labeled (Ground Truth):** The labeled data is the ground truth used to train the models. It is annotated by legal experts, indicating the existence of administrative costs (and other cost types) in the legislation. Each instance of the dataset represents a specific text passage.
+
+This dataset has the following data splits:
+
+* Training Set: This portion of the data is used to train and fine-tune machine learning models.
+
+* Test Set: The test set is reserved for assessing the model's accuracy, generalization, and effectiveness. It remains unseen during training and helps gauge how well the model performs on new, unseen data.
+
+Together, these labeled data subsets provide a crucial reference point for building and evaluating models, ensuring they can make informed predictions and classifications with high accuracy and reliability.
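The subsets above map one-to-one to the builder configs defined in `py_legislation.py` (added in this commit). A minimal loading sketch, assuming the repository root as the working directory so the script and its relative parquet paths resolve:

```python
# Minimal sketch: loading one of the subsets described above through the
# builder script added in this commit. Assumes the working directory is the
# repository root so relative paths such as "./data/0_raw/raw.parquet" resolve.
from datasets import load_dataset

raw = load_dataset("./py_legislation.py", name="raw", split="train")
print(raw.column_names)  # expected: ['source_id', 'doc_source_id', 'document', 'text']
```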
config.py ADDED
@@ -0,0 +1,2 @@
+# Language used to select the category label sets in obligations.py ("ES" or "PT").
+LANGUAGE = "ES"
features_specs.py ADDED
@@ -0,0 +1,34 @@
+import datasets
+
+from obligations import affected_entity, cost_type, aa_categories, aa_categories_unique, io_categories
+
+# Columns shared by every config.
+BASIC_FEATURES_SPEC = {
+    "source_id": datasets.Value(dtype="int64"),
+    "doc_source_id": datasets.Value(dtype="int64"),
+    "document": datasets.Value(dtype="string"),
+    "text": datasets.Value(dtype="string"),
+}
+
+# The "raw" config exposes the same columns as the basic spec.
+RAW_FEATURES_SPEC = {
+    "source_id": datasets.Value(dtype="int64"),
+    "doc_source_id": datasets.Value(dtype="int64"),
+    "document": datasets.Value(dtype="string"),
+    "text": datasets.Value(dtype="string"),
+}
+
+# The "sentences_unlabeled" config adds the category columns to be filled
+# in by the expert annotators.
+SENTENCES_UNLABELED_FEATURES_SPEC = {
+    "source_id": datasets.Value(dtype="int64"),
+    "doc_source_id": datasets.Value(dtype="int64"),
+    "document": datasets.Value(dtype="string"),
+    "text": datasets.Value(dtype="string"),
+
+    # Categories
+    "cost_type": datasets.ClassLabel(names=cost_type),
+    "affected_entity": datasets.ClassLabel(names=affected_entity),
+    "io_categories": datasets.Sequence(datasets.ClassLabel(names=io_categories)),
+    "aa_categories": datasets.Sequence(datasets.ClassLabel(names=aa_categories)),
+    "aa_categories_unique": datasets.Sequence(datasets.ClassLabel(names=aa_categories_unique)),
+}
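For reference, `datasets.ClassLabel` stores each label as an integer index into its `names` list, and `datasets.Sequence` wraps a feature into a variable-length list. A small sketch of the mapping, using the English `cost_type` names from `obligations.py`:

```python
# Sketch of how ClassLabel maps the label names used above to integer ids.
import datasets

label = datasets.ClassLabel(names=["no_cost", "adm_cost", "direct_cost", "other_cost"])
print(label.str2int("adm_cost"))  # 1
print(label.int2str(0))           # 'no_cost'
print(label.num_classes)          # 4
```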
obligations.py ADDED
@@ -0,0 +1,193 @@
+from config import LANGUAGE
+
+
+# Default (English) label names; overridden below according to LANGUAGE.
+cost_type = ["no_cost", "adm_cost", "direct_cost", "other_cost"]
+cost_type_ES = ["sin_costo", "costo_adm", "costo_directo", "otro_costo"]
+cost_type_PT = ["sem_custo", "custo_adm", "custo_direto", "outro_custo"]
+
+affected_entity = ["no_affected_ent", "companies", "citizens", "public_adm"]
+affected_entity_ES = ["ent_no_afectada", "empresas", "ciudadanos", "adm_publica"]
+affected_entity_PT = ["ent_nao_afetada", "empresas", "cidadaos", "adm_publica"]
+
+# [i] -
+# [i] IO Categories -
+# [i] -
+
+io_categories_PT = [
+    "prestacao_info_empresarial_e_fiscal",
+    "pedidos_de_licencas_e_outros",
+    "registos_e_notificacoes",
+    "candidatura_a_subsidios_e_outros",
+    "disponibilizacao_de_manuais_e_outros",
+    "cooperacao_com_auditorias_e_outros",
+    "prestacao_info_a_consumidores",
+    "outras_ois",
+]
+
+io_categories_ES = [
+    "prestacion_de_informacion_empresarial_y_fiscal",
+    "solicitudes_de_licencias_y_otras",
+    "registros_y_notificaciones",
+    "solicitud_de_subsidios_y_otras",
+    "disponibilidad_de_manuales_y_otras",
+    "cooperacion_con_auditorias_y_otras",
+    "prestacion_de_informacion_a_consumidores",
+    "otras_OIS",
+]
+
+# [i] -
+# [i] AA Categories Unique -
+# [i] -
+
+aa_categories_unique_PT = [
+    "familiarizacao_com_oi",
+    "recolha_e_organizacao_de_info",
+    "processamento_de_info",
+    "tempos_de_espera",
+    "deslocacoes",
+    "submissao_de_info",
+    "preservacao_de_info",
+]
+
+aa_categories_unique_ES = [
+    "familiarizacion_con_OI",
+    "recoleccion_y_organizacion_de_informacion",
+    "procesamiento_de_informacion",
+    "tiempos_de_espera",
+    "desplazamientos",
+    "envio_de_informacion",
+    "preservacion_de_informacion",
+]
+
+# [i] -
+# [i] AA Categories -
+# [i] -
+
+aa_categories_PT = [
+    "aa_1_familiarizacao_com_oi",
+    "aa_1_recolha_e_organizacao_de_info",
+    "aa_1_processamento_de_info",
+    "aa_1_tempos_de_espera",
+    "aa_1_deslocacoes",
+    "aa_1_submissao_de_info",
+    "aa_1_preservacao_de_info",
+    "aa_2_familiarizacao_com_oi",
+    "aa_2_recolha_e_organizacao_de_info",
+    "aa_2_processamento_de_info",
+    "aa_2_tempos_de_espera",
+    "aa_2_deslocacoes",
+    "aa_2_submissao_de_info",
+    "aa_2_preservacao_de_info",
+    "aa_3_familiarizacao_com_oi",
+    "aa_3_recolha_e_organizacao_de_info",
+    "aa_3_processamento_de_info",
+    "aa_3_tempos_de_espera",
+    "aa_3_deslocacoes",
+    "aa_3_submissao_de_info",
+    "aa_3_preservacao_de_info",
+    "aa_4_familiarizacao_com_oi",
+    "aa_4_recolha_e_organizacao_de_info",
+    "aa_4_processamento_de_info",
+    "aa_4_tempos_de_espera",
+    "aa_4_deslocacoes",
+    "aa_4_submissao_de_info",
+    "aa_4_preservacao_de_info",
+    "aa_5_familiarizacao_com_oi",
+    "aa_5_recolha_e_organizacao_de_info",
+    "aa_5_processamento_de_info",
+    "aa_5_tempos_de_espera",
+    "aa_5_deslocacoes",
+    "aa_5_submissao_de_info",
+    "aa_5_preservacao_de_info",
+    "aa_6_familiarizacao_com_oi",
+    "aa_6_recolha_e_organizacao_de_info",
+    "aa_6_processamento_de_info",
+    "aa_6_tempos_de_espera",
+    "aa_6_deslocacoes",
+    "aa_6_submissao_de_info",
+    "aa_6_preservacao_de_info",
+    "aa_7_familiarizacao_com_oi",
+    "aa_7_recolha_e_organizacao_de_info",
+    "aa_7_processamento_de_info",
+    "aa_7_tempos_de_espera",
+    "aa_7_deslocacoes",
+    "aa_7_submissao_de_info",
+    "aa_7_preservacao_de_info",
+]
+
+aa_categories_ES = [
+    "aa_1_familiarizacion_con_OI",
+    "aa_1_recoleccion_y_organizacion_de_informacion",
+    "aa_1_procesamiento_de_informacion",
+    "aa_1_tiempos_de_espera",
+    "aa_1_desplazamientos",
+    "aa_1_envio_de_informacion",
+    "aa_1_preservacion_de_informacion",
+    "aa_2_familiarizacion_con_OI",
+    "aa_2_recoleccion_y_organizacion_de_informacion",
+    "aa_2_procesamiento_de_informacion",
+    "aa_2_tiempos_de_espera",
+    "aa_2_desplazamientos",
+    "aa_2_envio_de_informacion",
+    "aa_2_preservacion_de_informacion",
+    "aa_3_familiarizacion_con_OI",
+    "aa_3_recoleccion_y_organizacion_de_informacion",
+    "aa_3_procesamiento_de_informacion",
+    "aa_3_tiempos_de_espera",
+    "aa_3_desplazamientos",
+    "aa_3_envio_de_informacion",
+    "aa_3_preservacion_de_informacion",
+    "aa_4_familiarizacion_con_OI",
+    "aa_4_recoleccion_y_organizacion_de_informacion",
+    "aa_4_procesamiento_de_informacion",
+    "aa_4_tiempos_de_espera",
+    "aa_4_desplazamientos",
+    "aa_4_envio_de_informacion",
+    "aa_4_preservacion_de_informacion",
+    "aa_5_familiarizacion_con_OI",
+    "aa_5_recoleccion_y_organizacion_de_informacion",
+    "aa_5_procesamiento_de_informacion",
+    "aa_5_tiempos_de_espera",
+    "aa_5_desplazamientos",
+    "aa_5_envio_de_informacion",
+    "aa_5_preservacion_de_informacion",
+    "aa_6_familiarizacion_con_OI",
+    "aa_6_recoleccion_y_organizacion_de_informacion",
+    "aa_6_procesamiento_de_informacion",
+    "aa_6_tiempos_de_espera",
+    "aa_6_desplazamientos",
+    "aa_6_envio_de_informacion",
+    "aa_6_preservacion_de_informacion",
+    "aa_7_familiarizacion_con_OI",
+    "aa_7_recoleccion_y_organizacion_de_informacion",
+    "aa_7_procesamiento_de_informacion",
+    "aa_7_tiempos_de_espera",
+    "aa_7_desplazamientos",
+    "aa_7_envio_de_informacion",
+    "aa_7_preservacion_de_informacion",
+]
+
+
+# Selected according to LANGUAGE. No English category lists are defined,
+# so these stay empty when LANGUAGE is neither "ES" nor "PT".
+io_categories = []
+aa_categories_unique = []
+aa_categories = []
+
+
+if LANGUAGE == "ES":
+    io_categories = io_categories_ES
+    aa_categories_unique = aa_categories_unique_ES
+    aa_categories = aa_categories_ES
+    cost_type = cost_type_ES
+    affected_entity = affected_entity_ES
+elif LANGUAGE == "PT":
+    io_categories = io_categories_PT
+    aa_categories_unique = aa_categories_unique_PT
+    aa_categories = aa_categories_PT
+    cost_type = cost_type_PT
+    affected_entity = affected_entity_PT
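Two Python pitfalls were fixed in the lists above: adjacent string literals with no separating comma are implicitly concatenated into a single string, and a trailing comma after an assignment wraps the value in a one-element tuple. A quick illustration of both:

```python
# Why the missing commas and stray trailing commas were bugs.
broken = [
    "aa_1_familiarizacion_con_OI"  # no comma here, so this literal and the
    "aa_1_tiempos_de_espera"       # next one are concatenated into ONE string
]
print(len(broken))   # 1

fixed = [
    "aa_1_familiarizacion_con_OI",
    "aa_1_tiempos_de_espera",
]
print(len(fixed))    # 2

tupled = [],             # trailing comma: the value becomes the tuple ([],)
print(type(tupled))      # <class 'tuple'>, not <class 'list'>
```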
py_legislation.py ADDED
@@ -0,0 +1,101 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+
+"""
+Paraguay Legislation Dataset Builder
+class PY_legislation(datasets.GeneratorBasedBuilder)
+
+Defines the implementation of the Paraguay Legislation dataset builder (GeneratorBasedBuilder).
+"""
+
+import datasets
+import pyarrow.parquet as pq
+
+from features_specs import BASIC_FEATURES_SPEC, RAW_FEATURES_SPEC, SENTENCES_UNLABELED_FEATURES_SPEC
+from py_legislation_metadata import PY_LEGISLATION_METADATA
+
+
+class PY_legislation(datasets.GeneratorBasedBuilder):
+    VERSION = datasets.Version("1.0.0")
+
+    BUILDER_CONFIGS = [
+        datasets.BuilderConfig(
+            name="raw", version=VERSION,
+            description=PY_LEGISLATION_METADATA["raw-description"],
+        ),
+        datasets.BuilderConfig(
+            name="sentences_unlabeled", version=VERSION,
+            description=PY_LEGISLATION_METADATA["sentences-unlabeled-description"],
+        ),
+        datasets.BuilderConfig(
+            name="sentences_labeled", version=VERSION,
+            description=PY_LEGISLATION_METADATA["sentences-labeled-description"],
+        ),
+    ]
+
+    # [i] Info
+    def _info(self):
+        """
+        This method specifies the datasets.DatasetInfo object which contains
+        information and typings for the dataset.
+        """
+        description = ""
+
+        # Use an if/elif chain: with separate `if` statements, the trailing
+        # `else` would overwrite `features` with the basic spec for every
+        # config other than "sentences_labeled".
+        if self.config.name == "raw":
+            description = PY_LEGISLATION_METADATA["raw-description"]
+            features = datasets.Features(RAW_FEATURES_SPEC)
+        elif self.config.name == "sentences_unlabeled":
+            description = PY_LEGISLATION_METADATA["sentences-unlabeled-description"]
+            features = datasets.Features(SENTENCES_UNLABELED_FEATURES_SPEC)
+        elif self.config.name == "sentences_labeled":
+            description = PY_LEGISLATION_METADATA["sentences-labeled-description"]
+            features = datasets.Features(BASIC_FEATURES_SPEC)
+        else:
+            features = datasets.Features(BASIC_FEATURES_SPEC)
+
+        return datasets.DatasetInfo(
+            description=description,
+            features=features,
+            homepage=PY_LEGISLATION_METADATA["homepage"],
+            license=PY_LEGISLATION_METADATA["license"],
+            citation=PY_LEGISLATION_METADATA["citation"],
+        )
+
+    def _split_generators(self, dl_manager):
+        # Resolve the parquet file for the active config.
+        urls = PY_LEGISLATION_METADATA["urls"][self.config.name]
+        urls = dl_manager.download_and_extract(urls)
+
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                gen_kwargs={"filepath": urls},
+            ),
+        ]
+
+    def _generate_examples(self, filepath):
+        """
+        This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+        The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+        Note: method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
+        """
+        pq_table = pq.read_table(filepath)
+        for i in range(len(pq_table)):
+            yield i, {
+                col_name: pq_table[col_name][i].as_py()
+                for col_name in pq_table.column_names
+            }
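A short sketch of inspecting the builder's per-config `DatasetInfo` without materializing any data, again assuming the repository root as the working directory:

```python
# Sketch: inspecting the builder's metadata for one config.
import datasets

builder = datasets.load_dataset_builder("./py_legislation.py", name="sentences_unlabeled")
print(builder.info.description)
print(builder.info.features)
```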
py_legislation_metadata.py ADDED
@@ -0,0 +1,70 @@
+import textwrap
+import datasets
+
+
+# [i] -
+# [i] GENERAL DESCRIPTIONS -
+# [i] -
+
+PY_LEGISLATION_METADATA = {
+    "citation": """\
+@InProceedings{huggingface:dataset,
+title = {Paraguay Legislation Dataset},
+author = {Peres, Fernando; Costa, Victor},
+year = {2023}
+}
+""",
+
+    "description": textwrap.dedent("""\
+        Dataset for researching NLP techniques on Paraguay legislation.
+        """),
+
+    "homepage": "https://www.leyes.com.py/",
+
+    "license": "apache-2.0",
+
+    "urls": {
+        "raw": "./data/0_raw/raw.parquet",
+        "sentences_unlabeled": "./data/1_sentences_unlabeled/unlabeled.parquet",
+        "sentences_labeled": "./data/2_sentences_labeled/labeled.parquet",
+    },
+
+    "raw-description": textwrap.dedent("""
+        Data extracted from the source files (URLs, PDFs and Word files) without any transformation or sentence splitting. It is useful because you can access the raw text extracted from the seeds (PDFs and Word files) and apply further preprocessing from this point without having to re-extract text from the source files.
+        """),
+
+    "sentences-unlabeled-description": textwrap.dedent("""
+        Unlabeled corpora of Paraguay legislation. This data is prepared to be labeled by the experts.
+
+        Each observation of the dataset represents a specific text passage, split according to the original formatting of the raw text (from the original documents).
+        """),
+
+    "sentences-labeled-description": textwrap.dedent("""
+        The labeled data is the ground truth used to train the models. It is annotated by legal experts, indicating the existence of administrative costs (and other cost types) in the legislation.
+
+        Each observation of the dataset represents a specific text passage.
+        """),
+}
+
+
+# Scratch sketch of an alternative per-config metadata layout (currently unused).
+x = {
+    "config_names": {
+        "raw": {
+            "description": "",
+            "features": {
+                "source_id": datasets.Value(dtype="int64"),
+                "doc_source_id": datasets.Value(dtype="int64"),
+                "document": datasets.Value(dtype="string"),
+                "text": datasets.Value(dtype="string"),
+            },
+        },
+    },
+}
+
+# x["config_names"]["raw"]["description"]
requirements.txt ADDED
@@ -0,0 +1,30 @@
+aiohttp==3.8.5
+aiosignal==1.3.1
+async-timeout==4.0.3
+attrs==23.1.0
+certifi==2023.7.22
+charset-normalizer==3.3.0
+datasets==2.14.5
+dill==0.3.7
+filelock==3.12.4
+frozenlist==1.4.0
+fsspec==2023.6.0
+huggingface-hub==0.17.3
+idna==3.4
+multidict==6.0.4
+multiprocess==0.70.15
+numpy==1.26.0
+packaging==23.2
+pandas==2.1.1
+pyarrow==13.0.0
+python-dateutil==2.8.2
+pytz==2023.3.post1
+PyYAML==6.0.1
+requests==2.31.0
+six==1.16.0
+tqdm==4.66.1
+typing_extensions==4.8.0
+tzdata==2023.3
+urllib3==2.0.5
+xxhash==3.3.0
+yarl==1.9.2