ChristianHugD committed on
Commit 322d0ae · verified · 1 Parent(s): 8f93135

Upload initial dataset files from '/home/christian/Desktop/Uni/Bachelor-Thesis/datasets/because'

Files changed (3):
  1. README.md +223 -0
  2. because_test.csv +20 -0
  3. because_train.csv +0 -0
README.md ADDED
@@ -0,0 +1,223 @@
+ ---
+ language:
+ - en
+ license: mit
+ pretty_name: because
+ task_categories:
+ - text-classification
+ - token-classification
+ configs:
+ - config_name: sequence-classification
+   description: Data prepared for identifying the presence of a causal relation in a text.
+   data_files:
+   - split: train
+     path: because_train.csv
+   - split: test
+     path: because_test.csv
+   features: # Features expected by this config
+   - name: text
+     dtype: string
+   - name: seq_label
+     dtype:
+       class_label:
+         names:
+           '0': negative_causal_relation
+           '1': positive_causal_relation
+   task_templates: # Metadata for the Hub's display and task understanding
+   - task: text-classification
+     text_column: text
+     labels_column: seq_label
+     labels:
+     - negative_causal_relation
+     - positive_causal_relation
+
+ - config_name: pair-classification
+   description: Data prepared for classifying whether two text spans stand in a causal relationship.
+   data_files:
+   - split: train
+     path: because_train.csv
+   - split: test
+     path: because_test.csv
+   features:
+   - name: text_w_pairs
+     dtype: string
+   - name: pair_label
+     dtype:
+       class_label:
+         names:
+           '0': negative_causal_relation
+           '1': positive_causal_relation
+   task_templates:
+   - task: pair-classification
+     text_column: text_w_pairs
+     labels_column: pair_label
+     labels:
+     - negative_causal_relation
+     - positive_causal_relation
+
+ - config_name: token-classification
+   description: Data prepared for span detection of Cause and Effect entities within text.
+   data_files:
+   - split: train
+     path: because_train.csv
+   - split: test
+     path: because_test.csv
+   features:
+   - name: text
+     dtype: string
+   - name: tokens # Tokenized version of text for aligning labels
+     sequence: string
+   - name: labels # Token-level ground-truth labels (BIO tags)
+     sequence:
+       class_label:
+         names:
+           '0': O        # Outside any annotated causal span
+           '1': B-Cause  # Beginning of a Cause span
+           '2': I-Cause  # Inside a Cause span
+           '3': B-Effect # Beginning of an Effect span
+           '4': I-Effect # Inside an Effect span
+   task_templates:
+   - task: token-classification
+     text_column: tokens # For token classification, the input is the token list
+     labels_column: labels
+     entity_type_names:
+     - O
+     - B-Cause
+     - I-Cause
+     - B-Effect
+     - I-Effect
+ ---
+
+ # Causality Extraction Dataset
+
+ This repository contains the `Causality Extraction` dataset, packaged for easy loading within the Hugging Face ecosystem. It supports research on identifying causal relationships in text through three distinct configurations: **`sequence-classification`** (detecting whether a causal relation is present in a text), **`pair-classification`** (classifying the causal relationship between two marked text spans), and **`token-classification`** (extracting causal spans).
+
+ ---
+
+ ## Table of Contents
+
+ - [Causality Extraction Dataset](#causality-extraction-dataset)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [What is this dataset about?](#what-is-this-dataset-about)
+     - [Why was this dataset created?](#why-was-this-dataset-created)
+     - [Who is the intended audience?](#who-is-the-intended-audience)
+   - [Configurations Overview](#configurations-overview)
+     - [1. `sequence-classification` Config](#1-sequence-classification-config)
+       - [Key Columns](#key-columns)
+       - [Data Instance Example](#data-instance-example)
+     - [2. `pair-classification` Config](#2-pair-classification-config)
+       - [Key Columns](#key-columns-1)
+       - [Data Instance Example](#data-instance-example-1)
+     - [3. `token-classification` Config](#3-token-classification-config)
+       - [Key Columns](#key-columns-2)
+       - [Data Instance Example](#data-instance-example-2)
+   - [Data Splits](#data-splits)
+   - [How to Load the Dataset](#how-to-load-the-dataset)
+   - [Dataset Creation and Maintenance](#dataset-creation-and-maintenance)
+     - [Source Data](#source-data)
+     - [Annotations](#annotations)
+     - [Preprocessing](#preprocessing)
+     - [Maintenance Plan](#maintenance-plan)
+   - [Considerations for Using the Dataset](#considerations-for-using-the-dataset)
+     - [Bias and Limitations](#bias-and-limitations)
+     - [Ethical Considerations](#ethical-considerations)
+   - [Licensing](#licensing)
+   - [Citations](#citations)
+   - [Contact](#contact)
+
+ ---
+
+ ## Dataset Description
+
+ ### What is this dataset about?
+ This dataset contains text examples annotated for causal relations at different granularities. It provides:
+ * A document-level label indicating the overall presence or absence of a causal relation within a text.
+ * Annotations classifying the relationship between two specific text spans (which may or may not be explicitly marked).
+ * Token-level annotations (Cause/Effect spans) for extracting the specific components of causality.
+
+ This multi-level annotation supports both coarse-grained and fine-grained causal analysis, serving a range of research and application needs.
+
+ ### Why was this dataset created?
+ This dataset was created to support research and development in automated causal extraction. By providing multiple levels of annotation, it enables the training and evaluation of models for:
+ * **Identifying whether causality exists** in a given text (sequence-classification).
+ * **Classifying specific causal relationships between identified or provided argument pairs** (pair-classification).
+ * **Pinpointing the exact phrases** that represent causes and effects (token-classification).
+
+ This combination can facilitate advances in areas such as event understanding, scientific text analysis, and legal document analysis.
+
+ ### Who is the intended audience?
+ This dataset is intended for researchers and developers in Natural Language Processing (NLP) and Artificial Intelligence (AI) who are interested in:
+ * Causal inference and extraction.
+ * Text classification (especially relation detection).
+ * Token classification (e.g., named entity recognition for causal spans).
+ * Pair classification and relation extraction.
+ * Multi-task learning approaches in NLP.
+
+ ---
+
+ ## Configurations Overview
+
+ This dataset offers the following configurations, each tailored to a specific causal extraction task. You select the desired configuration when loading the dataset with `load_dataset()`. All configurations share the same underlying data files (`because_train.csv`, `because_test.csv`) but interpret specific columns for their respective tasks.
+
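As a quick orientation to that shared layout, the sketch below parses one row in the `because_test.csv` column format using only the standard library. The header line is copied from that file and the single row is one real example from it; note that `csv` yields every value as a string until you cast it.

```python
import csv
import io

# Header copied from because_test.csv, followed by one real (quoted) row.
sample = (
    "corpus,doc_id,sent_id,eg_id,index,text,text_w_pairs,seq_label,pair_label,context,num_sents\n"
    'because,Article247_327.ann,13,0,because_Article247_327.ann_13_0,'
    '"Because it has published three times since the initial omission, it scores an additional three points.",'
    '"Because <ARG0>it has published three times since the initial omission</ARG0>, <ARG1>it scores an additional three points</ARG1>.",'
    '1,1,,1\n'
)

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["seq_label"])   # '1' -- a string, not an int, until cast
print("<ARG0>" in rows[0]["text_w_pairs"])
```

For the real files, replace `io.StringIO(sample)` with an open file handle on `because_train.csv` or `because_test.csv`.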
+ ### 1. `sequence-classification` Config
+
+ This configuration provides text and a binary label indicating whether a causal relation is present in the text. It is designed for **sequence classification** tasks.
+
+ #### Key Columns
+ * `text`: `string` - The input text, representing the document or sentence to be classified. This serves as the **input feature** for models.
+ * `seq_label`: `int` - The binary label indicating the presence (`1`) or absence (`0`) of a causal relation. This is the **target label** for classification.
+   * `0`: `negative_causal_relation` (no causal relation detected)
+   * `1`: `positive_causal_relation` (a causal relation is present)
+
+ #### Data Instance Example
+ ```json
+ {
+   "text": "We have gotten the agreement of the Chairman and the Secretary, preliminary to any opening statements, to stay until 1 p.m. We will probably have some votes, so we will maximize our time.",
+   "seq_label": 1
+ }
+ ```
+
+ ### 2. `pair-classification` Config
+
+ This configuration focuses on classifying the causal relationship between two pre-defined text spans within a larger text. It is designed for **pair-classification** tasks where the input marks the potential cause and effect arguments.
+
+ #### Key Columns
+ * `text_w_pairs`: `string` - The text in which the potential causal arguments are explicitly marked (e.g., with tags like `<ARG0>` and `<ARG1>`). This is the **input feature** for models.
+ * `pair_label`: `int` - The binary label indicating whether the relationship between the marked pair is causal (`1`) or not (`0`). This is the **target label** for classification.
+   * `0`: `negative_causal_relation` (no causal relation between the pair)
+   * `1`: `positive_causal_relation` (a causal relation exists between the pair)
+
+ #### Data Instance Example
+ ```json
+ {
+   "text_w_pairs": "We have gotten the agreement of the Chairman and the Secretary, preliminary to any opening statements, to stay until 1 p.m. <ARG0>We will probably have some votes</ARG0>, so <ARG1>we will maximize our time</ARG1>.",
+   "pair_label": 1
+ }
+ ```
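Because the arguments follow a fixed `<ARG0>…</ARG0>` / `<ARG1>…</ARG1>` pattern, the marked spans can be recovered with a small regex. The helper below is an illustrative sketch; the function name is ours, not part of the dataset.

```python
import re

def extract_args(text_w_pairs: str) -> dict:
    """Return the text inside each <ARG0>/<ARG1> tag pair, if present."""
    spans = {}
    for tag in ("ARG0", "ARG1"):
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text_w_pairs, flags=re.S)
        if m:
            spans[tag] = m.group(1)
    return spans

example = (
    "We have gotten the agreement of the Chairman and the Secretary, "
    "preliminary to any opening statements, to stay until 1 p.m. "
    "<ARG0>We will probably have some votes</ARG0>, so "
    "<ARG1>we will maximize our time</ARG1>."
)
print(extract_args(example))
# {'ARG0': 'We will probably have some votes', 'ARG1': 'we will maximize our time'}
```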
+
+ ### 3. `token-classification` Config
+
+ This configuration provides pre-tokenized text and corresponding token-level labels (BIO tags) that mark the spans of Causes and Effects. It is suitable for **token classification** (span detection) tasks.
+
+ #### Key Columns
+ * `text`: `string` - The original raw text (provided for context).
+ * `tokens`: `list[str]` - The pre-tokenized version of `text`. This is the **input feature** for models.
+ * `labels`: `list[int]` - A list of integer IDs, where each ID corresponds to the BIO tag of the respective token in `tokens`. This is the **target label** for span detection.
+   * `0`: `O` (outside of any annotated causal span)
+   * `1`: `B-Cause` (beginning of a Cause span)
+   * `2`: `I-Cause` (inside a Cause span)
+   * `3`: `B-Effect` (beginning of an Effect span)
+   * `4`: `I-Effect` (inside an Effect span)
+
+ #### Data Instance Example
+ The BIO tags below mark "heavy rain" as the Cause span and "flooding in the streets" as the Effect span.
+ ```json
+ {
+   "text": "The heavy rain caused flooding in the streets.",
+   "tokens": ["The", "heavy", "rain", "caused", "flooding", "in", "the", "streets", "."],
+   "labels": [0, 1, 2, 0, 3, 4, 4, 4, 0]
+ }
+ ```
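To turn the integer labels back into readable Cause/Effect spans, a small BIO decoder along these lines can be used. This is a sketch under our own names: the `ID2TAG` table mirrors the `class_label` names in the config above, and `decode_spans` is an illustrative helper, not part of the dataset.

```python
# Mapping from label ids to BIO tags, mirroring the config's class_label names.
ID2TAG = {0: "O", 1: "B-Cause", 2: "I-Cause", 3: "B-Effect", 4: "I-Effect"}

def decode_spans(tokens, labels):
    """Collect (entity_type, span_text) pairs from parallel token/label lists."""
    spans, current = [], None  # current = (entity_type, [span tokens])
    for tok, lab in zip(tokens, labels):
        tag = ID2TAG[lab]
        if tag.startswith("B-"):
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:  # "O", or an I- tag that does not continue the open span
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:
        spans.append((current[0], " ".join(current[1])))
    return spans

tokens = ["The", "heavy", "rain", "caused", "flooding", "in", "the", "streets", "."]
labels = [0, 1, 2, 0, 3, 4, 4, 4, 0]
print(decode_spans(tokens, labels))
# [('Cause', 'heavy rain'), ('Effect', 'flooding in the streets')]
```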
because_test.csv ADDED
@@ -0,0 +1,20 @@
+ corpus,doc_id,sent_id,eg_id,index,text,text_w_pairs,seq_label,pair_label,context,num_sents
+ because,Article247_327.ann,3,0,because_Article247_327.ann_3_0,They will then score one point for every subsequent issue or broadcast or Internet posting after the first offense is noted by Chatterbox if they continue not to report said inconvenient fact--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,They will then score one point for <ARG1>every subsequent issue or broadcast or Internet posting</ARG1> after <ARG0>the first offense is noted by Chatterbox</ARG0> if they continue not to report said inconvenient fact--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,1,0,,1
+ because,Article247_327.ann,3,1,because_Article247_327.ann_3_1,They will then score one point for every subsequent issue or broadcast or Internet posting after the first offense is noted by Chatterbox if they continue not to report said inconvenient fact--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,<ARG1>They will then score one point for every subsequent issue or broadcast or Internet posting after the first offense is noted by Chatterbox</ARG1> if <ARG0>they continue not to report said inconvenient fact</ARG0>--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,1,1,,1
+ because,Article247_327.ann,6,0,because_Article247_327.ann_6_0,"(Chatterbox would prefer not to invoke the phrase ""intellectual dishonesty,"" because it's pompous and falsely suggests that only intellectuals can be intellectually dishonest.","(<ARG1>Chatterbox would prefer not to invoke the phrase ""intellectual dishonesty</ARG1>,"" because <ARG0>it's pompous and falsely suggests that only intellectuals can be intellectually dishonest</ARG0>.",1,1,,1
+ because,Article247_327.ann,8,0,because_Article247_327.ann_8_0,"For the purposes of this survey, the Wall Street Journal will be counted as a separate and distinct publication from the Journal 's editorial page, because, in essence, it is.","<ARG1>For the purposes of this survey, the Wall Street Journal will be counted as a separate and distinct publication from the Journal 's editorial page</ARG1>, because, in essence, <ARG0>it is</ARG0>.",1,1,,1
+ because,Article247_327.ann,12,0,because_Article247_327.ann_12_0,The Journal editorial page gets one point for failing to note the pardon in its initial Op-Ed by Dorothy Rabinowitz on Feb. 19.,<ARG1>The Journal editorial page gets one point</ARG1> for <ARG0>failing to note the pardon in its initial Op-Ed by Dorothy Rabinowitz on Feb. 19</ARG0>.,1,1,,1
+ because,Article247_327.ann,13,0,because_Article247_327.ann_13_0,"Because it has published three times since the initial omission, it scores an additional three points.","Because <ARG0>it has published three times since the initial omission</ARG0>, <ARG1>it scores an additional three points</ARG1>.",1,1,,1
+ because,Article247_327.ann,14,0,because_Article247_327.ann_14_0,And because it published an editorial Feb. 22 taunting the rest of the press for not following it on the story--and still didn't mention the pardon--it scores an extra two points.,And because <ARG0>it published an editorial Feb. 22 taunting the rest of the press for not following it on the story--and still didn't mention the pardon</ARG0>--<ARG1>it scores an extra two points</ARG1>.,1,1,,1
+ because,Article247_327.ann,18,0,because_Article247_327.ann_18_0,But the Scientific Method does not permit any tinkering with the Indis Index 's scoring procedures.,But <ARG0>the Scientific Method</ARG0> does not permit <ARG1>any tinkering with the Indis Index 's scoring procedures</ARG1>.,0,0,,1
+ because,Article247_327.ann,16,0,because_Article247_327.ann_16_0,"Chatterbox feels certain that the Journal editorial page will provide some follow-up tomorrow to tonight's NBC broadcast of its own Broaddrick interview, which means that if the Journal editorial page continues to take no action it will be in the Indis Hall of Fame by Monday at the latest!","Chatterbox feels certain that the Journal editorial page will provide some follow-up tomorrow to tonight's NBC broadcast of its own Broaddrick interview, which means that if <ARG0>the Journal editorial page continues to take no action</ARG0> <ARG1>it will be in the Indis Hall of Fame by Monday at the latest</ARG1>!",1,1,,1
+ because,Article247_327.ann,20,0,because_Article247_327.ann_20_0,"When Chatterbox asked the Journal 's DC bureau chief, Alan Murray, who exercised good judgment in not breaking the Broaddrick story (and--full disclosure-- is Chatterbox's former boss), to comment about a Journal employee's feeding sources to the Times , he replied: ""I don't really have any comment on what the edit page did.","When <ARG0>Chatterbox asked the Journal 's DC bureau chief, Alan Murray, who exercised good judgment in not breaking the Broaddrick story (and--full disclosure-- is Chatterbox's former boss), to comment about a Journal employee's feeding sources to the Times</ARG0> , <ARG1>he replied: ""I don't really have any comment on what the edit page did. </ARG1>",0,0,,1
+ because,Article247_327.ann,22,0,because_Article247_327.ann_22_0,Which is what Journal news employees are instructed to say whenever the editorial page causes them cringing embarrassment.,Which is what Journal news employees are instructed to say whenever <ARG0>the editorial page</ARG0> causes <ARG1>them cringing embarrassment</ARG1>.,1,1,,1
+ because,Article247_327.ann,23,0,because_Article247_327.ann_23_0,"For her part, Rabinowitz explains to Chatterbox that her efforts on behalf of the Times were more indifferent than the Times made them sound.","For her part, Rabinowitz explains to Chatterbox that her efforts on behalf of the Times were more indifferent than <ARG0>the Times</ARG0> made <ARG1>them sound</ARG1>.",1,1,,1
+ because,Article247_327.ann,30,0,because_Article247_327.ann_30_0,She said she would if he called again.,She said <ARG1>she would</ARG1> if <ARG0>he called again</ARG0>.,0,0,,1
+ because,Article247_327.ann,17,0,because_Article247_327.ann_17_0,"Chatterbox considered but rejected the idea of awarding Rabinowitz bonus points for having ""eventually convinced"" Broaddrick to grant an interview to the New York Times (as the Times reports in today's story).","Chatterbox considered but rejected the idea of <ARG1>awarding Rabinowitz bonus points</ARG1> for <ARG0>having ""eventually convinced"" Broaddrick to grant an interview to the New York Times (as the Times reports in today's story)</ARG0>.",1,1,,1
+ because,Article247_327.ann,2,0,because_Article247_327.ann_2_0,Here's how it works: Publications that refuse to acknowledge (even if to refute the importance of) highly significant but inconvenient facts in their news or opinion coverage of controversial events will score one point for the initial offense.,Here's how it works: <ARG1>Publications that refuse to acknowledge (even if to refute the importance of) highly significant but inconvenient facts in their news or opinion coverage of controversial events will score one point</ARG1> for <ARG0>the initial offense</ARG0>.,1,1,,1
+ because,Article247_327.ann,3,2,because_Article247_327.ann_3_2,They will then score one point for every subsequent issue or broadcast or Internet posting after the first offense is noted by Chatterbox if they continue not to report said inconvenient fact--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,<ARG1>They will then score one point</ARG1> for <ARG0>every subsequent issue or broadcast or Internet posting after the first offense is noted by Chatterbox</ARG0> if they continue not to report said inconvenient fact--and an additional two points on days when the news organization runs a follow-up without making note of said inconvenient fact.,1,1,,1
+ because,Article247_327.ann,14,1,because_Article247_327.ann_14_1,And because it published an editorial Feb. 22 taunting the rest of the press for not following it on the story--and still didn't mention the pardon--it scores an extra two points.,And because it published an editorial Feb. 22 <ARG1>taunting the rest of the press</ARG1> for <ARG0>not following it on the story</ARG0>--and still didn't mention the pardon--it scores an extra two points.,1,1,,1
+ because,Article247_327.ann,19,0,because_Article247_327.ann_19_0,"And besides, Rabinowitz's efforts on the Times ' behalf weren't really unethical, just puzzling, given the two newspapers' intense rivalry.","And besides, <ARG1>Rabinowitz's efforts on the Times ' behalf weren't really unethical, just puzzling</ARG1>, given <ARG0>the two newspapers' intense rivalry</ARG0>.",1,1,,1
+ because,Article247_327.ann,31,0,because_Article247_327.ann_31_0,I passed this on to [ Times reporter] Felicity Barringer during our second day's interview chat.,<ARG1>I passed this on to [ Times reporter] Felicity Barringer</ARG1> during <ARG0>our second day's interview chat</ARG0>.,0,0,,1
because_train.csv ADDED
The diff for this file is too large to render. See raw diff