Datasets · Modalities: Text · Libraries: Datasets

parquet-converter committed
Commit 1649eb1 · 1 Parent(s): 6049f3e

Update parquet files

Files changed (4):
  1. .gitattributes +0 -51
  2. README.md +0 -216
  3. UKP_ASPECT.py +0 -147
  4. standard/ukp_aspect-train.parquet +3 -0
.gitattributes DELETED
@@ -1,51 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
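Each deleted line above routed files matching a glob pattern through Git LFS. As a rough illustration only (not Git's actual attribute matcher, which handles path anchoring and `**` specially), deciding whether a path falls under such patterns can be sketched with `fnmatch`:

```python
from fnmatch import fnmatch

# A subset of the patterns from the deleted .gitattributes above.
# fnmatch is a simplification of Git's real matching rules, used
# here only to illustrate how paths are routed to LFS.
LFS_PATTERNS = ["*.parquet", "*.zip", "*.bin", "*.png", "*tfevents*"]

def routed_through_lfs(path: str) -> bool:
    """Return True if the path matches any LFS-tracked pattern."""
    return any(fnmatch(path, pat) for pat in LFS_PATTERNS)

print(routed_through_lfs("standard/ukp_aspect-train.parquet"))  # → True
```

Note that `standard/ukp_aspect-train.parquet` matches `*.parquet`, which is why the parquet file added in this commit is stored as an LFS pointer.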
README.md DELETED
@@ -1,216 +0,0 @@
- ---
- license: cc-by-nc-3.0
- ---
- # Dataset Card for UKP ASPECT
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998**
- - **Paper: https://aclanthology.org/P19-1054/**
- - **Leaderboard: n/a**
- - **Point of Contact: data\[at\]ukp.informatik.tu-darmstadt.de (http://www.ukp.tu-darmstadt.de/)**
-
- ### Dataset Summary
-
- The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper).
-
- ### Supported Tasks and Leaderboards
-
- This dataset supports the following tasks:
-
- * Sentence pair classification
- * Topic classification
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- Each instance consists of a topic, a pair of sentences, and an argument similarity label.
-
- ```
- {"3d printing";"This could greatly increase the quality of life of those currently living in less than ideal conditions.";"The advent and spread of new technologies, like that of 3D printing can transform our lives in many ways.";"DTORCD"}
- ```
-
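After loading, such an instance is exposed as one record with the four fields described in the card. A minimal sketch, using hypothetical placeholder values shaped like the example above:

```python
# One hypothetical record, shaped like the instance shown in the card;
# the field names match the dataset's schema (topic, sentence_1,
# sentence_2, label).
instance = {
    "topic": "3d printing",
    "sentence_1": "This could greatly increase the quality of life of "
                  "those currently living in less than ideal conditions.",
    "sentence_2": "The advent and spread of new technologies, like that "
                  "of 3D printing can transform our lives in many ways.",
    "label": "DTORCD",
}

# The label is one of the four annotation options.
VALID_LABELS = {"DTORCD", "NS", "SS", "HS"}
print(instance["label"] in VALID_LABELS)  # → True
```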
- ### Data Fields
-
- * topic: the topic keywords used to retrieve the documents
- * sentence_1: the first sentence of the pair
- * sentence_2: the second sentence of the pair
- * label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS)
-   * Different Topic/Can't decide (DTORCD): either one or both of the sentences belong to a topic different than the given one, or you can't understand one or both sentences. If you choose this option, you need to very briefly explain why you chose it (e.g. "The second sentence is not grammatical", "The first sentence is from a different topic").
-   * No Similarity (NS): the two arguments belong to the same topic, but they don't show any similarity, i.e. they speak about completely different aspects of the topic.
-   * Some Similarity (SS): the two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is way less specific than the other.
-   * High Similarity (HS): the two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words.
-
- ### Data Splits
-
- The dataset currently does not contain standard data splits.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering.
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018). The ArgumenText system takes an arbitrary topic (query) as input and searches a large web crawl for relevant documents. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic).
-
- We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. For each of our 28 topics, we applied a sampling strategy that picks two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 argument pairs, about 130 pairs for each topic.
-
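The sampling strategy described above can be sketched as rejection sampling over similarity bins. Everything below is a hypothetical illustration: `toy_similarity` is a deterministic stand-in for the Misra et al. (2016) scorer, and the equal-width binning is an assumption, not the paper's exact procedure.

```python
import random
import zlib

def balanced_pair_sample(sentences, similarity, n_pairs, bins=10, seed=0):
    """Rejection-sample sentence pairs so that similarity scores are
    spread evenly over `bins` equal-width bins in [0, 1)."""
    rng = random.Random(seed)
    per_bin = n_pairs // bins
    counts = [0] * bins
    kept = []
    while len(kept) < per_bin * bins:
        s1, s2 = rng.sample(sentences, 2)              # two distinct arguments
        b = min(int(similarity(s1, s2) * bins), bins - 1)
        if counts[b] < per_bin:                        # keep only while the bin has room
            counts[b] += 1
            kept.append((s1, s2))
    return kept

def toy_similarity(a, b):
    """Deterministic pseudo-score in [0, 1); not a real similarity model."""
    return (zlib.crc32((min(a, b) + "|" + max(a, b)).encode()) % 1000) / 1000

pairs = balanced_pair_sample([f"argument {i}" for i in range(100)],
                             toy_similarity, n_pairs=20)
```

With a real scorer in place of `toy_similarity`, the same loop yields a similarity-balanced pool of pairs per topic, as described above.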
- #### Who are the source language producers?
-
- Unidentified contributors to the world wide web.
-
- ### Annotations
-
- #### Annotation process
-
- The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option. We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard.
-
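As a hedged sketch of the consolidation step: MACE estimates each annotator's competence with an EM procedure and weights votes accordingly. A plain majority vote over the seven assignments, shown below, is a simplified stand-in, not the paper's actual method.

```python
from collections import Counter

def majority_vote(assignments):
    """Consolidate one pair's crowd labels by plain majority vote.

    Simplified stand-in for MACE (Hovy et al., 2013), which instead
    weights each vote by an EM-estimated annotator competence and can
    drop low-confidence items via a threshold.
    """
    (label, _), = Counter(assignments).most_common(1)
    return label

votes = ["SS", "SS", "HS", "SS", "NS", "SS", "HS"]  # seven assignments for one pair
print(majority_vote(votes))  # → SS
```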
136
- #### Who are the annotators?
137
-
138
- Crowd workers on Amazon Mechanical Turk
139
-
140
- ### Personal and Sensitive Information
141
-
142
- This dataset is fully anonymized.
143
-
144
- ## Additional Information
145
-
146
- You can download the data via:
147
-
148
- ```
149
- from datasets import load_dataset
150
-
151
- dataset = load_dataset("UKPLab/UKP_ASPECT")
152
- ```
153
- Please find more information about the code and how the data was collected in the [paper](https://aclanthology.org/P19-1054/).
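Once loaded, the train split behaves like a sequence of dicts with the fields listed in the card. A small sketch with hypothetical in-memory rows (in practice they would come from `load_dataset("UKPLab/UKP_ASPECT")["train"]`; the sentences here are placeholders):

```python
from collections import Counter

# Hypothetical rows shaped like the dataset's records.
rows = [
    {"topic": "3d printing", "sentence_1": "...", "sentence_2": "...", "label": "DTORCD"},
    {"topic": "3d printing", "sentence_1": "...", "sentence_2": "...", "label": "SS"},
    {"topic": "cloning",     "sentence_1": "...", "sentence_2": "...", "label": "SS"},
]

label_counts = Counter(r["label"] for r in rows)   # distribution over the 4 labels
topics = sorted({r["topic"] for r in rows})        # distinct topics in the sample
print(label_counts["SS"], topics)
```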
-
- ### Dataset Curators
-
- Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP.
-
- ### Licensing Information
-
- [CC-by-NC 3.0](https://creativecommons.org/licenses/by-nc/3.0/)
-
- ### Citation Information
-
- Please cite this data using:
-
- ```
- @inproceedings{reimers2019classification,
-   title={Classification and Clustering of Arguments with Contextualized Word Embeddings},
-   author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna},
-   booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
-   pages={567--578},
-   year={2019}
- }
- ```
-
- ### Contributions
-
- Thanks to [@buenalaune](https://github.com/buenalaune) for adding this dataset.
-
- ## Tags
-
- annotations_creators:
- - crowdsourced
-
- language:
- - en
-
- language_creators:
- - found
-
- license:
- - cc-by-nc-3.0
-
- multilinguality:
- - monolingual
-
- pretty_name: UKP ASPECT Corpus
-
- size_categories:
- - 1K<n<10K
-
- source_datasets:
- - original
-
- tags:
- - argument pair
- - argument similarity
-
- task_categories:
- - text-classification
-
- task_ids:
- - topic-classification
- - multi-input-text-classification
- - semantic-similarity-classification
UKP_ASPECT.py DELETED
@@ -1,147 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """UKP ASPECT: sentence pairs over 28 controversial topics, annotated with argument similarity."""
-
- import csv
- import os
-
- import datasets
-
- _CITATION = """\
- @inproceedings{reimers2019classification,
-   title={Classification and Clustering of Arguments with Contextualized Word Embeddings},
-   author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna},
-   booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
-   pages={567--578},
-   year={2019}
- }
- """
-
- _DESCRIPTION = """\
- The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper).
- """
-
- _HOMEPAGE = "https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998"
-
- _LICENSE = "Creative Commons Attribution-NonCommercial 3.0"
-
- _URL = "https://tudatalib.ulb.tu-darmstadt.de/bitstream/handle/tudatalib/1998/UKP_ASPECT.zip?sequence=1&isAllowed=y"
-
-
- class UKPAspectDataset(datasets.GeneratorBasedBuilder):
-     """3,595 sentence pairs over 28 controversial topics, annotated with argument similarity via crowdsourcing."""
-
-     VERSION = datasets.Version("1.1.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="standard", version=VERSION, description="Sentence pairs annotated with argument similarity")
-     ]
-
-     DEFAULT_CONFIG_NAME = "standard"
-
-     def _info(self):
-         if self.config.name == "standard":
-             features = datasets.Features(
-                 {
-                     "topic": datasets.Value("string"),
-                     "sentence_1": datasets.Value("string"),
-                     "sentence_2": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(names=["NS", "SS", "DTORCD", "HS"]),
-                 }
-             )
-         else:
-             raise ValueError(f'The only available config is "standard", but "{self.config.name}" was given')
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         # .zip files are extracted automatically by the download manager.
-         if self.config.name == "standard":
-             data_dir = dl_manager.download_and_extract(_URL)
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs are passed to _generate_examples.
-                     gen_kwargs={
-                         "filepath": os.path.join(data_dir, "UKP_ASPECT.tsv"),
-                         "split": "train",
-                     },
-                 )
-             ]
-         else:
-             raise ValueError(f'The only available config is "standard", but "{self.config.name}" was given')
-
-     def _generate_examples(self, filepath, split):
-         # Yields (key, example) tuples; the key must be unique per example.
-         with open(filepath, encoding="utf-8") as f:
-             creader = csv.reader(f, delimiter="\t", quotechar='"')
-             next(creader)  # skip header
-             for key, row in enumerate(creader):
-                 if self.config.name == "standard":
-                     topic, sentence_1, sentence_2, label = row
-                     yield key, {
-                         "topic": topic,
-                         "sentence_1": sentence_1,
-                         "sentence_2": sentence_2,
-                         "label": label,
-                     }
-                 else:
-                     raise ValueError(f'The only available config is "standard", but "{self.config.name}" was given')
standard/ukp_aspect-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3011a40b71c483ac43a5fb69552fb011450edccc8a6b1555dafc913a4e49eef
+ size 256651