thagen committed on
Commit 0981124 · 1 Parent(s): 883cb0e

added data
README.md CHANGED
@@ -5,14 +5,112 @@ task_categories:
 - token-classification
 language:
 - en
+multilinguality:
+- monolingual
+size_categories:
+- 1K<n<10K
 tags:
 - causality
 pretty_name: BECausE
 paperswithcode_id: ../paper/the-because-corpus-20-annotating-causality
-config_names:
-- causality detection
-- causal candidate extraction
-- causality identification
+configs:
+- config_name: causality detection
+  data_files:
+  - split: train
+    path: causality-detection/train.parquet
+  - split: dev
+    path: causality-detection/dev.parquet
+  # - split: test
+  #   path: causality-detection/test.parquet
+  features:
+  - name: index
+    dtype: string
+  - name: text
+    dtype: string
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': uncausal
+          '1': causal
+- config_name: causal candidate extraction
+  data_files:
+  - split: train
+    path: causal-candidate-extraction/train.parquet
+  # - split: dev
+  #   path: causal-candidate-extraction/dev.parquet
+  - split: test
+    path: causal-candidate-extraction/test.parquet
+  features:
+  - name: index
+    dtype: string
+  - name: tokens
+    sequence: string
+  - name: entity
+    sequence:
+      sequence: int32
+- config_name: causality identification
+  data_files:
+  - split: train
+    path: causality-identification/train.parquet
+  # - split: dev
+  #   path: causal-candidate-extraction/dev.parquet
+  - split: test
+    path: causality-identification/test.parquet
+  features:
+  - name: index
+    dtype: string
+  - name: text
+    dtype: string
+  - name: relations
+    list:
+    - name: relationship
+      dtype:
+        class_label:
+          names:
+            '0': no-rel # Does not really make sense but exists to have the same labels as the classification task
+            '1': causal
+    - name: first
+      dtype: string
+    - name: second
+      dtype: string
+train-eval-index:
+- config: causality detection
+  task: text-classification
+  task_id: text_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: label
+  metrics:
+  - type: accuracy
+  - type: precision
+  - type: recall
+  - type: f1
+- config: causal candidate extraction
+  task: token-classification
+  task_id: token_classification
+  splits:
+    train_split: train
+    eval_split: test
+  metrics:
+  - type: accuracy
+  - type: precision
+  - type: recall
+  - type: f1
+- config: causality identification
+  task: text-classification
+  task_id: text_classification
+  splits:
+    train_split: train
+    eval_split: test
+  metrics:
+  - type: accuracy
+  - type: precision
+  - type: recall
+  - type: f1
 ---
 
 > [!NOTE]
@@ -20,6 +118,11 @@ config_names:
 > [here](https://github.com/duncanka/BECAUSE). We used the [UniCausal](https://github.com/tanfiona/UniCausal/tree/main/data/splits) reformatting of the data as the basis
 > for this repository. Please see the [citations](#citations) at the end of this README.
 
+## Dataset Description
+
+- **Repository:** https://github.com/duncanka/BECAUSE
+- **Paper:** [The BECauSE Corpus 2.0: Annotating Causality and Overlapping Relations](https://doi.org/10.18653/v1/W17-0812)
+
 # Usage
 ## Causality Detection
 ```py
@@ -56,7 +159,7 @@ The BECauSE v2.0 paper by [Dunietz et al., 2017](https://www.cs.cmu.edu/~jduniet
 }
 ```
 
-UniCausal by [Tan et al., 2023](https://link.springer.com/chapter/10.1007/978-3-031-39831-5_23) -- who's dataformat we used to make BECausE compatible with hf datasets:
+UniCausal by [Tan et al., 2023](https://link.springer.com/chapter/10.1007/978-3-031-39831-5_23) &mdash; whose data format we used to make BECauSE compatible with HF datasets:
 ```bib
 @inproceedings{tan:2023,
 title = {{{UniCausal}}: {{Unified Benchmark}} and {{Repository}} for {{Causal Text Mining}}},
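The `causality detection` config's binary label (`'0': uncausal`, `'1': causal`) mirrors the UniCausal CSV column `causal_text_w_pairs`, as the conversion script below derives it: rows with at least one annotated cause-effect pair are labeled causal. A minimal sketch of that mapping (the cell values are invented for illustration, not taken from the corpus):

```python
import ast

# Raw CSV cells for causal_text_w_pairs: an empty cell means no annotated pair,
# a non-empty cell holds a Python-literal list of tagged sentence variants.
cells = ["", "['<ARG0>Rain</ARG0> caused <ARG1>flooding</ARG1>']"]
pairs = [ast.literal_eval(c) if c else [] for c in cells]
labels = [0 if len(p) == 0 else 1 for p in pairs]
print(labels)  # [0, 1] -> uncausal, causal
```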
causal-candidate-extraction/test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd820e1d8a340c9b3417593de5628bc713c9e97a55495508ac10e2d41cf52444
+size 5469
causal-candidate-extraction/train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e069976da5d897facbd7636dfa6d2d57e262423e2e239fc49ae430227bc3f74
+size 59878
causality-detection/test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f080a50b0a0a97472c2f65b6cb8d5d18ff1290f98a2f47076edfce4c924a4f42
+size 5304
causality-detection/train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31bb185248965b2b559e92316cd439f5a818b0768b28c5a66a1a0a37826498e0
+size 51859
causality-identification/test.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f88248bbd785127271e4315af735f320cbc6a85533dfee93e809656497dcfd21
+size 6390
causality-identification/train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8407c49547a94cd3e30fd2e611069a3ace78c74f285a17676cb4ff38bcc1bd3b
+size 59625
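Each of the parquet files above is checked in as a Git LFS pointer rather than the raw binary: a three-line text stub recording the LFS spec version, the SHA-256 of the actual file, and its size in bytes. A minimal sketch of reading such a pointer (the pointer text is copied from the first block above):

```python
# Parse a Git LFS pointer file: each line is "key value", space-separated once.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:fd820e1d8a340c9b3417593de5628bc713c9e97a55495508ac10e2d41cf52444
size 5469
"""
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(fields["oid"], fields["size"])  # the hash and byte size of the real parquet file
```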
conversion_script.py ADDED
@@ -0,0 +1,145 @@
+#!/usr/bin/env python3
+
+"""
+Run this script as ./conversion_script.py to convert the UniCausal BECauSE files to HF-compatible parquet files.
+"""
+
+# 1) Install dependencies:
+#      pip install pandas pyarrow
+# 2) Download these files:
+#      - https://raw.githubusercontent.com/tanfiona/UniCausal/refs/heads/main/data/grouped/splits/because_test.csv
+#      - https://raw.githubusercontent.com/tanfiona/UniCausal/refs/heads/main/data/grouped/splits/because_train.csv
+
+import ast
+from collections import defaultdict
+import re
+from typing import Literal, Union
+
+import pandas as pd
+
+
+Split = Literal["train", "dev", "test"]
+
+RelationCauseEffect: int = 1
+
+def __extract_entities_and_relations(row) -> tuple:
+    """
+    Converts ["<ARG0>Bla</ARG0> bla <ARG1>Bla</ARG1>", "Bla <ARG0>bla</ARG0> <ARG1>Bla</ARG1>"] to
+    ["Bla", " ", "bla", " ", "Bla"], [[1], [], [2], [], [3]], {relationship: 1, "first": 0, "second": 1}
+    """
+    text, withpairs = row["text"], row["causal_text_w_pairs"]
+    # Step 1: Remove all <SIG> tags
+    withpairs = [re.sub(r"<(/?)SIG(\d+)>", "", t) for t in withpairs]
+    # Step 2: Check that all texts are the same if we remove the ARG tags
+    without_tags = [re.sub(r"<(/?)ARG(\d+)>", "", t) for t in withpairs]
+    assert all(x == text for x in without_tags)  # All texts without tags must be equal to text
+    # Step 3: Iteratively update the tags
+    splits: list[str] = [text]
+    tags: list[list[int]] = [[]]
+    relations: list[tuple[int, int, int]] = []
+    def split_at_charidx(idx: int) -> int:
+        for i, s in enumerate(splits):
+            if idx < len(s):
+                if idx == 0:  # We are already at the beginning of a str; no need to split
+                    return i
+                else:  # Need to split
+                    # Concurrently editing and iterating the list; OK here since we return right after
+                    splits.insert(i+1, s[idx:])
+                    splits[i] = s[:idx]
+                    tags.insert(i+1, tags[i].copy())
+                    return i+1
+            idx -= len(s)
+        return len(splits)
+    def minify(tags: list[list[int]]) -> tuple[list[list[int]], dict[int, int]]:
+        """
+        Joins entities that denote the same spans.
+        Maps, e.g., [[0], [], [1, 2], [], [3], []] to [[0], [], [1], [], [2], []]
+        """
+        tag2pos = defaultdict(list)  # Maps tags to the positions they occur in
+        for idx, lst in enumerate(tags):
+            for x in lst:
+                tag2pos[x].append(idx)
+        pos2tag = defaultdict(list)
+        for tag, pos in tag2pos.items():
+            pos2tag[tuple(pos)].append(tag)
+        newtags = [[] for _ in range(len(tags))]
+        tagmap: dict[int, int] = dict()
+        for i, (pos, ts) in enumerate(pos2tag.items()):  # `ts`, not `tags`, to avoid shadowing the parameter
+            for t in ts:
+                tagmap[t] = i
+            for p in pos:
+                newtags[p].append(i)
+        return newtags, tagmap
+    nexttag: int = 0
+    for t in withpairs:
+        curtags: set[int] = set()
+        offset: int = 0
+        tagmap: dict[int, int] = dict()
+        for match in re.finditer(r"(.*?)<(/?)ARG(\d+)>", t):
+            # Put the text span from offset to offset+len(match[1]) into a new entity and set the entity label
+            # appropriately; should do nothing if the entity is already separated.
+            startidx = split_at_charidx(offset)  # Will never split but we can use it to get the index
+            stopidx = split_at_charidx(offset + len(match[1]))
+            for i in range(startidx, stopidx):
+                tags[i].extend(curtags)
+            if match[2] == "":
+                if int(match[3]) not in tagmap:
+                    tagmap[int(match[3])] = nexttag
+                    nexttag += 1
+                curtags.add(tagmap[int(match[3])])
+            else:
+                curtags.remove(tagmap[int(match[3])])
+            offset += len(match[1])
+        # Each entry in withpairs contains exactly one cause (ARG0) and effect (ARG1)
+        relations.append((RelationCauseEffect, tagmap[0], tagmap[1]))
+    tags, tagmap = minify(tags)
+    for i in range(len(relations)):
+        relations[i] = (relations[i][0], tagmap[relations[i][1]], tagmap[relations[i][2]])
+    return (splits, tags, relations)
+
+def convert_for_causality_detection(split: Split) -> None:
+    df = pd.read_csv(f"because_{split}.csv", converters={"causal_text_w_pairs": lambda x: ast.literal_eval(x) if x else []})
+    df["label"] = df["causal_text_w_pairs"].apply(lambda x: 0 if len(x) == 0 else 1)
+    df = df.set_index("index")
+    df = df[["label", "text"]]
+    df.to_parquet(f"./causality-detection/{split}.parquet", engine="pyarrow")
+
+def convert_for_causal_candidate_extraction(split: Split) -> None:
+    def map_list_to_tokens(row):
+        splits, tags, _ = __extract_entities_and_relations(row)
+        return pd.Series((splits, tags))
+    df = pd.read_csv(f"because_{split}.csv", converters={"causal_text_w_pairs": lambda x: ast.literal_eval(x) if x else []})
+    df[["tokens", "entity"]] = df[["text", "causal_text_w_pairs"]].apply(map_list_to_tokens, axis=1)
+    df = df[["index", "tokens", "entity"]].set_index("index")
+    df.to_parquet(f"./causal-candidate-extraction/{split}.parquet", engine="pyarrow")
+
+def convert_for_causality_identification(split: Split) -> None:
+    def map_to_labels(row):
+        splits, tags, relations = __extract_entities_and_relations(row)
+        text: str = ""
+        cur_ents: set[int] = set()
+        for s, t in zip(splits, tags):
+            for newent in (set(t) - cur_ents):
+                text += f"<e{newent+1}>"
+            for oldent in (cur_ents - set(t)):
+                text += f"</e{oldent+1}>"
+            cur_ents = set(t)
+            text += s
+        reldict: list[dict[str, Union[int, str]]] = []
+        for rtype, rfirst, rsecond in relations:
+            reldict.append({"relationship": rtype, "first": f"e{rfirst+1}", "second": f"e{rsecond+1}"})
+        return pd.Series((text, reldict))
+    df = pd.read_csv(f"because_{split}.csv", converters={"causal_text_w_pairs": lambda x: ast.literal_eval(x) if x else []})
+    df[["text", "relations"]] = df[["text", "causal_text_w_pairs"]].apply(map_to_labels, axis=1)
+    df = df[["index", "text", "relations"]].set_index("index")
+    return df.to_parquet(f"./causality-identification/{split}.parquet", engine="pyarrow")
+
+convert_for_causality_detection("test")
+# convert_for_causality_detection("dev")
+convert_for_causality_detection("train")
+convert_for_causal_candidate_extraction("test")
+# convert_for_causal_candidate_extraction("dev")
+convert_for_causal_candidate_extraction("train")
+convert_for_causality_identification("test")
+# convert_for_causality_identification("dev")
+convert_for_causality_identification("train")
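The tag handling in `__extract_entities_and_relations` relies on the invariant checked by the assert in step 2: stripping all SIG and ARG tags from a `causal_text_w_pairs` entry must reproduce the raw `text` column. A toy check of that invariant, using the same regexes as the script (the sentence is invented for illustration):

```python
import re

# An untagged sentence and its annotated counterpart (illustrative, not from the corpus).
text = "Rain caused flooding because of the storm."
tagged = "<ARG0>Rain</ARG0> caused <ARG1>flooding</ARG1> <SIG0>because</SIG0> of the storm."
# Step 1: drop the signal-word tags, step 2: drop the argument tags.
no_sig = re.sub(r"<(/?)SIG(\d+)>", "", tagged)
no_tags = re.sub(r"<(/?)ARG(\d+)>", "", no_sig)
assert no_tags == text  # the untagged text round-trips to the original sentence
print(no_tags)
```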