Commit b952da6 (parent: 6a6dac9): Update README.md

README.md (CHANGED)
@@ -18,8 +18,6 @@ size_categories:
 - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
-- [Data Instances](#data-instances)
-- [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
@@ -84,12 +82,38 @@ Morante and Blanco. 2012](https://aclanthology.org/S12-1035/)), SFU Review ([Kon
 - id (int): token id
 - ws (boolean): whether the token is followed by whitespace
 
-### Data Instances
-[More Information Needed]
-### Data Fields
-[More Information Needed]
 ### Data Splits
 For each subset, train (70%), test (20%), and validation (10%) splits are available.
 ### Source Data
 | Subset | Source |
 |-------------------|----------------------|
@@ -108,7 +132,8 @@ The data is annotated for negation cues and their scopes. Annotation guidelines
 #### Annotation process
 Each language was annotated by one native-speaking annotator, following strict annotation guidelines.
 
 ### Citation Information
-
 TBD
-
#### How to use this dataset

To load all data, use the `'all_all'` configuration, or pass one of the following configurations as the second argument:
`'de', 'fr', 'it', 'swiss', 'fr_dalloux', 'fr_all', 'en_bioscope', 'en_sherlock', 'en_sfu', 'en_all', 'all_all'`

```python
from datasets import load_dataset

dataset = load_dataset("rcds/MultiLegalNeg", "all_all")
dataset
```
```
DatasetDict({
    train: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 26440
    })
    test: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 7593
    })
    validation: Dataset({
        features: ['text', 'spans', 'tokens'],
        num_rows: 4053
    })
})
```
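The `num_rows` values in the `DatasetDict` output above line up with the stated 70/20/10 split. A quick sanity check, using only the counts reported above:

```python
# Row counts reported for the 'all_all' configuration in the README.
counts = {"train": 26440, "test": 7593, "validation": 4053}

total = sum(counts.values())  # 38086 examples in total
fractions = {split: n / total for split, n in counts.items()}

# Each fraction lands within one percentage point of 70% / 20% / 10%.
for split, target in [("train", 0.70), ("test", 0.20), ("validation", 0.10)]:
    assert abs(fractions[split] - target) < 0.01
```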
### Citation Information
```
TBD
```
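Among the token-level fields documented in the diff above is `ws`, a boolean marking whether a token is followed by whitespace. That flag is enough to reconstruct running text from a token sequence. A minimal sketch with hypothetical records; the `text` key for the surface form is an assumption for illustration, not a documented field name:

```python
# Hypothetical token records: 'id' and 'ws' mirror the documented fields;
# the 'text' key for the token's surface form is assumed for this sketch.
tokens = [
    {"id": 0, "text": "No", "ws": True},
    {"id": 1, "text": "liability", "ws": True},
    {"id": 2, "text": "arises", "ws": False},
    {"id": 3, "text": ".", "ws": False},
]

# Append a space only after tokens whose 'ws' flag is set.
detokenized = "".join(t["text"] + (" " if t["ws"] else "") for t in tokens)
# → "No liability arises."
```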