---
license: mit
task_categories:
- text-classification
- token-classification
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
tags:
- causality
pretty_name: BECauSE v2
paperswithcode_id: the-because-corpus-20-annotating-causality
configs:
- config_name: causality detection
  data_files:
  - split: train
    path: causality-detection/train.parquet
  - split: test
    path: causality-detection/test.parquet
  features:
  - name: index
    dtype: string
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': uncausal
          '1': causal
- config_name: causal candidate extraction
  data_files:
  - split: train
    path: causal-candidate-extraction/train.parquet
  - split: test
    path: causal-candidate-extraction/test.parquet
  features:
    - name: index
      dtype: string
    - name: text
      dtype: string
    - name: entity
      sequence:
        sequence: int32
- config_name: causality identification
  data_files:
  - split: train
    path: causality-identification/train.parquet
  - split: test
    path: causality-identification/test.parquet
  features:
  - name: index
    dtype: string
  - name: text
    dtype: string
  - name: relations
    list:
    - name: relationship
      dtype:
        class_label:
          names:
            '0': no-rel  # Not meaningful for this task, but kept so the labels match the classification config
            '1': causal
    - name: first
      dtype: string
    - name: second
      dtype: string
train-eval-index:
- config: causality detection
  task: text-classification
  task_id: text_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: label
  metrics:
  - type: accuracy
  - type: precision
  - type: recall
  - type: f1
- config: causal candidate extraction
  task: token-classification
  task_id: token_classification
  splits:
    train_split: train
    eval_split: test
  metrics:
  - type: accuracy
  - type: precision
  - type: recall
  - type: f1
- config: causality identification
  task: text-classification
  task_id: text_classification
  splits:
    train_split: train
    eval_split: test
  metrics:
  - type: accuracy
  - type: precision
  - type: recall
  - type: f1
---

> [!NOTE]  
> This repository integrates the BECauSE corpus into Hugging Face Datasets, in conformance with BECauSE's MIT license. The original dataset is available
> [here](https://github.com/duncanka/BECAUSE). We used the [UniCausal](https://github.com/tanfiona/UniCausal/tree/main/data/splits) reformatting of the data as the basis
> for this repository. Please see the [citations](#citations) at the end of this README.

# Dataset Description

- **Repository:** https://github.com/duncanka/BECAUSE
- **Paper:** [The BECauSE Corpus 2.0: Annotating Causality and Overlapping Relations](https://doi.org/10.18653/v1/W17-0812)

# Usage
## Causality Detection
```py
from datasets import load_dataset
dataset = load_dataset("webis/BECauSEv2", "causality detection")
```

## Causal Candidate Extraction
```py
from datasets import load_dataset
dataset = load_dataset("webis/BECauSEv2", "causal candidate extraction")
```
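Here the `entity` column is a sequence of integer sequences. The sketch below assumes each inner pair holds character offsets into `text`; both the record and that interpretation are invented for illustration, so check a real sample to confirm the encoding:

```python
# Hypothetical record shaped like this config's features. The offsets are
# made up, and reading them as character spans is an assumption.
example = {
    "index": "demo-0",
    "text": "The flood occurred because the dam broke.",
    "entity": [[0, 9], [27, 40]],
}

# Slice each candidate span out of the sentence.
spans = [example["text"][start:end] for start, end in example["entity"]]
print(spans)  # -> ['The flood', 'the dam broke']
```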

## Causality Identification
```py
from datasets import load_dataset
dataset = load_dataset("webis/BECauSEv2", "causality identification")
```
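Each record in this config carries a list of `relations`, where `relationship` is a `ClassLabel` index over `no-rel`/`causal` and `first`/`second` name the two arguments. A self-contained sketch of iterating such a record (the sentence and spans are invented for illustration):

```python
# Label names as declared in this card's front matter.
names = ["no-rel", "causal"]

# Hypothetical record matching this config's schema.
example = {
    "index": "demo-0",
    "text": "Heavy rain caused the flood.",
    "relations": [
        {"relationship": 1, "first": "Heavy rain", "second": "the flood"},
    ],
}

# Decode each relation's label index into its name.
for rel in example["relations"]:
    name = names[rel["relationship"]]
    print(f'{rel["first"]} -[{name}]-> {rel["second"]}')
    # -> Heavy rain -[causal]-> the flood
```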

# Citations

The BECauSE v2.0 paper by [Dunietz et al., 2017](https://www.cs.cmu.edu/~jdunietz/publications/because-v2.pdf):
```bib
@inproceedings{dunietz:2017,
  title = {The {{BECauSE Corpus}} 2.0: {{Annotating Causality}} and {{Overlapping Relations}}},
  shorttitle = {The {{BECauSE Corpus}} 2.0},
  booktitle = {Proceedings of the 11th {{Linguistic Annotation Workshop}}, {{LAW}}@{{EACL}} 2017, {{Valencia}}, {{Spain}}, {{April}} 3, 2017},
  author = {Dunietz, Jesse and Levin, Lori S. and Carbonell, Jaime G.},
  editor = {Schneider, Nathan and Xue, Nianwen},
  year = {2017},
  pages = {95--104},
  publisher = {Association for Computational Linguistics},
  doi = {10.18653/V1/W17-0812}
}
```

UniCausal by [Tan et al., 2023](https://link.springer.com/chapter/10.1007/978-3-031-39831-5_23), whose data format we used to make BECauSE compatible with Hugging Face Datasets:
```bib
@inproceedings{tan:2023,
  title = {{{UniCausal}}: {{Unified Benchmark}} and {{Repository}} for {{Causal Text Mining}}},
  shorttitle = {{{UniCausal}}},
  booktitle = {Big {{Data Analytics}} and {{Knowledge Discovery}} - 25th {{International Conference}}, {{DaWaK}} 2023, {{Penang}}, {{Malaysia}}, {{August}} 28-30, 2023, {{Proceedings}}},
  author = {Tan, Fiona Anting and Zuo, Xinyu and Ng, See-Kiong},
  editor = {Wrembel, Robert and Gamper, Johann and Kotsis, Gabriele and Tjoa, A. Min and Khalil, Ismail},
  year = {2023},
  series = {Lecture {{Notes}} in {{Computer Science}}},
  volume = {14148},
  pages = {248--262},
  publisher = {Springer},
  doi = {10.1007/978-3-031-39831-5_23}
}
```