ArneBinder committed · Commit 69bc0e9 (verified) · Parent: 171478c

use pie-modules instead of pytorch-ie


See https://github.com/ArneBinder/pie-datasets/pull/204 for further information.

Files changed (3):
  1. README.md +10 -10
  2. requirements.txt +2 -2
  3. sciarg.py +66 -8
README.md CHANGED
````diff
@@ -9,7 +9,7 @@ Therefore, the `sciarg` dataset as described here follows the data structure fro
 ```python
 from pie_datasets import load_dataset
 from pie_datasets.builders.brat import BratDocumentWithMergedSpans, BratDocument
-from pytorch_ie.documents import TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions, TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions
+from pie_modules.documents import TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions, TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions
 
 # load default version
 dataset = load_dataset("pie/sciarg")
@@ -74,20 +74,20 @@ See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema)
 
 The dataset provides document converters for the following target document types:
 
-- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
+- `pie_modules.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
   - `LabeledSpans`, converted from `BratDocument`'s `spans`
     - labels: `background_claim`, `own_claim`, `data`
     - if `spans` contain whitespace at the beginning and/or the end, the whitespace are trimmed out.
   - `BinraryRelations`, converted from `BratDocument`'s `relations`
     - labels: `supports`, `contradicts`, `semantically_same`, `parts_of_same`
     - if the `relations` label is `semantically_same` or `parts_of_same`, they are merged if they are the same arguments after sorting.
-- `pytorch_ie.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`
+- `pie_modules.documents.TextDocumentWithLabeledSpansBinaryRelationsAndLabeledPartitions`
   - `LabeledSpans`, as above
   - `BinaryRelations`, as above
   - `LabeledPartitions`, partitioned `BratDocument`'s `text`, according to the paragraph, using regex.
     - labels: `title`, `abstract`, `H1`
 
-See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
+See [here](https://github.com/ArneBinder/pie-modules/blob/main/src/pie_modules/documents.py) for the document type
 definitions.
 
 ### Data Splits
@@ -155,7 +155,7 @@ possibly since [Lauscher et al., 2018](https://aclanthology.org/W18-5206/) prese
 
 - `supports`:
   - if the assumed veracity of *b* increases with the veracity of *a*
-  - "Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible." - (*Annotation Guidelines*, p. 3)
+  - "Usually, this relationship exists from data to claim, but in many cases a claim might support another claim. Other combinations are still possible." - (*Annotation Guidelines*, p. 3)
 - `contradicts`:
   - if the assumed veracity of *b* decreases with the veracity of *a*
   - It is a **bi-directional**, i.e., symmetric relationship.
@@ -183,7 +183,7 @@ Above: Diagram from *Annotation Guildelines* (p.6)
 
 Below: Subset of relations in `A01`
 
-![sample2](img/sciarg-sam.png)
+![sample2](img/sciarg_train_0.png)
 
 ### Collected Statistics after Document Conversion
 
@@ -335,7 +335,7 @@ python src/evaluate_documents.py dataset=sciarg_base metric=count_text_tokens
 
 ### Curation Rationale
 
-"\[C\]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...\[A\]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the
+"[C]omputational methods for analyzing scientific writing are becoming paramount...there is no publicly available corpus of scientific publications (in English), annotated with fine-grained argumentative structures. ...[A]rgumentative structure of scientific publications should not be studied in isolation, but rather in relation to other rhetorical aspects, such as the
 discourse structure.
 (Lauscher et al. 2018, p. 40)
 
@@ -343,7 +343,7 @@ discourse structure.
 
 #### Initial Data Collection and Normalization
 
-"\[W\]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject." (Fisas et al. 2015, p. 44)
+"[W]e randomly selected a set of 40 documents, available in PDF format, among a bigger collection provided by experts in the domain, who pre-selected a representative sample of articles in Computer Graphics. Articles were classified into four important subjects in this area: Skinning, Motion Capture, Fluid Simulation and Cloth Simulation. We included in the corpus 10 highly representative articles for each subject." (Fisas et al. 2015, p. 44)
 
 "The Corpus includes 10,789 sentences, with an average of 269.7 sentences per document." (p. 45)
 
@@ -367,7 +367,7 @@ The annotation were done using BRAT Rapid Annotation Tool ([Stenetorp et al., 20
 
 ### Personal and Sensitive Information
 
-\[More Information Needed\]
+[More Information Needed]
 
 ## Considerations for Using the Data
 
@@ -384,7 +384,7 @@ of the different rhetorical aspects of scientific language (which we dub *scitor
 
 "While the background claims and own claims are on average of similar length (85 and 87 characters, respectively), they are much longer than data components (average of 25 characters)."
 
-"\[A\]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies."
+"[A]nnotators identified an average of 141 connected component per publication...This indicates that either authors write very short argumentative chains or that our annotators had difficulties noticing long-range argumentative dependencies."
 
 (Lauscher et al. 2018, p.43)
 
````
 
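The converter notes above state that `semantically_same` and `parts_of_same` relations are merged when they have the same arguments after sorting. A minimal, self-contained sketch of that deduplication idea (not the pie-modules implementation; function and variable names here are illustrative):

```python
# Symmetric relation labels are treated as unordered pairs, so (a, b) and
# (b, a) with the same label count as duplicates.
SYMMETRIC_LABELS = {"semantically_same", "parts_of_same"}

def deduplicate_relations(relations):
    """relations: list of (head, tail, label) triples.
    Returns the unique triples, sorting the arguments of symmetric labels
    before comparing."""
    seen = set()
    result = []
    for head, tail, label in relations:
        if label in SYMMETRIC_LABELS:
            key = (*sorted((head, tail)), label)
        else:
            key = (head, tail, label)
        if key not in seen:
            seen.add(key)
            result.append((head, tail, label))
    return result

relations = [
    ("s1", "s2", "parts_of_same"),
    ("s2", "s1", "parts_of_same"),  # same unordered pair -> dropped
    ("s1", "s2", "supports"),
    ("s2", "s1", "supports"),       # directed label -> both kept
]
print(deduplicate_relations(relations))
```

Directed labels such as `supports` keep their argument order, so reversed pairs survive deduplication.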
requirements.txt CHANGED
```diff
@@ -1,3 +1,3 @@
-pie-datasets>=0.6.0,<0.11.0
-pie-modules>=0.10.8,<0.12.0
+pie-datasets>=0.10.11,<0.11.0
+pie-modules>=0.15.9,<0.16.0
 networkx>=3.0.0,<4.0.0
```
sciarg.py CHANGED
```diff
@@ -1,14 +1,15 @@
+import dataclasses
 import logging
 from typing import Union
 
+from pie_core import AnnotationLayer, Document, annotation_field
 from pie_modules.document.processing import (
     RegexPartitioner,
     RelationArgumentSorter,
     SpansViaRelationMerger,
     TextSpanTrimmer,
 )
-from pytorch_ie.core import Document
-from pytorch_ie.documents import (
+from pie_modules.documents import (
     TextDocumentWithLabeledMultiSpansAndBinaryRelations,
     TextDocumentWithLabeledMultiSpansBinaryRelationsAndLabeledPartitions,
     TextDocumentWithLabeledSpansAndBinaryRelations,
@@ -16,7 +17,12 @@ from pytorch_ie.documents import (
 )
 
 from pie_datasets.builders import BratBuilder, BratConfig
-from pie_datasets.builders.brat import BratDocument, BratDocumentWithMergedSpans
+from pie_datasets.builders.brat import (
+    BratAttribute,
+    BratDocument,
+    BratDocumentWithMergedSpans,
+    BratNote,
+)
 from pie_datasets.core.dataset import DocumentConvertersType
 from pie_datasets.document.processing import Caster, Pipeline
 
@@ -26,6 +32,35 @@ URL = "http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip"
 SPLIT_PATHS = {"train": "compiled_corpus"}
 
 
+@dataclasses.dataclass
+class ConvertedBratDocument(TextDocumentWithLabeledMultiSpansAndBinaryRelations):
+    span_attributes: AnnotationLayer[BratAttribute] = annotation_field(
+        target="labeled_multi_spans"
+    )
+    relation_attributes: AnnotationLayer[BratAttribute] = annotation_field(
+        target="binary_relations"
+    )
+    notes: AnnotationLayer[BratNote] = annotation_field(
+        targets=[
+            "labeled_multi_spans",
+            "binary_relations",
+            "span_attributes",
+            "relation_attributes",
+        ]
+    )
+
+
+@dataclasses.dataclass
+class ConvertedBratDocumentWithMergedSpans(TextDocumentWithLabeledSpansAndBinaryRelations):
+    span_attributes: AnnotationLayer[BratAttribute] = annotation_field(target="labeled_spans")
+    relation_attributes: AnnotationLayer[BratAttribute] = annotation_field(
+        target="binary_relations"
+    )
+    notes: AnnotationLayer[BratNote] = annotation_field(
+        targets=["labeled_spans", "binary_relations", "span_attributes", "relation_attributes"]
+    )
+
+
 def get_common_converter_pipeline_steps(target_document_type: type[Document]) -> dict:
     return dict(
         cast=Caster(
@@ -106,13 +141,36 @@ class SciArg(BratBuilder):
     def _generate_document(self, example, **kwargs):
         document = super()._generate_document(example, **kwargs)
         if self.config.resolve_parts_of_same:
-            document = SpansViaRelationMerger(
-                relation_layer="relations",
+            # we need to convert the document to a different type to be able to merge the spans:
+            # SpansViaRelationMerger expects the spans to be of type LabeledSpan,
+            # but the document has spans of type BratSpan
+            converted_doc = document.as_type(
+                ConvertedBratDocumentWithMergedSpans,
+                field_mapping={
+                    "spans": "labeled_spans",
+                    "relations": "binary_relations",
+                },
+                keep_remaining=True,
+            )
+            merged_document = SpansViaRelationMerger(
+                relation_layer="binary_relations",
                 link_relation_label="parts_of_same",
                 create_multi_spans=True,
-                result_document_type=BratDocument,
-                result_field_mapping={"spans": "spans", "relations": "relations"},
-            )(document)
+                result_document_type=ConvertedBratDocument,
+                result_field_mapping={
+                    "labeled_spans": "labeled_multi_spans",
+                    "binary_relations": "binary_relations",
+                    "span_attributes": "span_attributes",
+                    "relation_attributes": "relation_attributes",
+                    "notes": "notes",
+                },
+            )(converted_doc)
+            # convert back to BratDocument
+            document = merged_document.as_type(
+                BratDocument,
+                field_mapping={"labeled_multi_spans": "spans", "binary_relations": "relations"},
+                keep_remaining=True,
+            )
         else:
            # some documents have duplicate relations, remove them
            remove_duplicate_relations(document)
```
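The updated `_generate_document` resolves `parts_of_same` by handing the link relations to `SpansViaRelationMerger`, which groups connected spans into multi-spans. The core grouping idea can be sketched as a small union-find over the link relations (a simplified illustration under assumed tuple representations, not the pie-modules implementation):

```python
from collections import defaultdict

def merge_linked_spans(spans, relations, link_label="parts_of_same"):
    """spans: list of (start, end) tuples; relations: (head_idx, tail_idx, label)
    triples. Returns multi-spans: one sorted tuple of slices per connected
    component of spans linked by `link_label`."""
    parent = list(range(len(spans)))

    def find(i):
        # path-halving union-find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # only link relations merge spans; other labels are ignored here
    for head, tail, label in relations:
        if label == link_label:
            union(head, tail)

    groups = defaultdict(list)
    for idx, span in enumerate(spans):
        groups[find(idx)].append(span)
    return [tuple(sorted(g)) for g in groups.values()]

spans = [(0, 5), (10, 15), (20, 25)]
relations = [(0, 1, "parts_of_same"), (1, 2, "supports")]
print(merge_linked_spans(spans, relations))
# two multi-spans: ((0, 5), (10, 15)) and ((20, 25),)
```

In the real builder, the merged result additionally carries over attributes and notes via `result_field_mapping`, which is why the commit introduces the intermediate `ConvertedBratDocument*` types.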