<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/TreeBench.png" width="30%" height="30%">

# Turkish Treebank Benchmarking

This is the repo for Turkish treebank benchmarking, i.e. evaluating Transformer models on the joint POS-DEP-MORPH task.

For the data, we used two treebanks, [IMST](https://github.com/UniversalDependencies/UD_Turkish-IMST) and [BOUN](https://github.com/UniversalDependencies/UD_Turkish-BOUN). We converted the CoNLL-U format to JSON Lines to be compatible with HF dataset formats.
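The CoNLL-U-to-JSON-lines conversion described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual conversion script; `conllu_to_record` is a hypothetical helper name, and it assumes the standard CoNLL-U column order (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC):

```python
import json

def conllu_to_record(sent_id, conllu_block):
    """Convert one CoNLL-U sentence block into the JSON-lines record
    layout used by this dataset (tokens, upos, heads, rels, feats).
    Hypothetical helper for illustration only."""
    tokens, upos, heads, rels, feats = [], [], [], [], []
    for line in conllu_block.strip().splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue  # skip multi-word token ranges and empty nodes
        tokens.append(cols[1])        # FORM
        upos.append(cols[3])          # UPOS
        feats.append(cols[5])         # FEATS
        heads.append(int(cols[6]))    # HEAD
        rels.append(cols[7])          # DEPREL
    return {
        "id": sent_id,
        "tokens": tokens,
        "upos": upos,
        "heads": heads,
        "rels": rels,
        "feats": feats,
        "text": " ".join(tokens),
        # Each FEATS string also stored as a compact JSON object string
        "feats_dict_json": [
            json.dumps(
                dict(kv.split("=", 1) for kv in f.split("|")),
                separators=(",", ":"),
            )
            if f != "_" else "{}"
            for f in feats
        ],
    }
```

Note that multi-word token lines (IDs like `4-5`) are dropped here, which matches how surface words such as "esiyordu" appear split into `"esiyor"` and `"du"` in the instances below.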

Here are treebank sizes at a glance:

| Dataset | train lines | dev lines | test lines |
|---|---|---|---|
| BOUN | 7803 | 979 | 979 |
| IMST | 3435 | 1100 | 1100 |

A typical instance from the dataset looks like:

```
{
  "id": "ins_1267",
  "tokens": [
    "Rüzgâr",
    "yine",
    "güçlü",
    "esiyor",
    "du",
    "."
  ],
  "upos": [
    "NOUN",
    "ADV",
    "ADV",
    "VERB",
    "AUX",
    "PUNCT"
  ],
  "heads": [
    4,
    4,
    4,
    0,
    4,
    4
  ],
  "rels": [
    "nsubj",
    "advmod",
    "advmod",
    "root",
    "cop",
    "punct"
  ],
  "feats": [
    "Case=Nom|Number=Sing|Person=3",
    "_",
    "_",
    "Aspect=Imp|Polarity=Pos|VerbForm=Part",
    "Aspect=Perf|Evident=Fh|Number=Sing|Person=3|Tense=Past",
    "_"
  ],
  "text": "Rüzgâr yine güçlü esiyor du .",
  "feats_dict_json": [
    "{\"Case\":\"Nom\",\"Number\":\"Sing\",\"Person\":\"3\"}",
    "{}",
    "{}",
    "{\"Aspect\":\"Imp\",\"Polarity\":\"Pos\",\"VerbForm\":\"Part\"}",
    "{\"Aspect\":\"Perf\",\"Evident\":\"Fh\",\"Number\":\"Sing\",\"Person\":\"3\",\"Tense\":\"Past\"}",
    "{}"
  ]
}
```
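Since `feats_dict_json` stores each token's morphological features as a JSON-encoded string, consumers need to decode those strings before use. A minimal sketch (the helper name `decode_feats` is ours, not part of the dataset's tooling):

```python
import json

def decode_feats(record):
    """Pair each token in a dataset record with its morphological
    features, decoded from the `feats_dict_json` strings."""
    return [
        (tok, json.loads(fd))
        for tok, fd in zip(record["tokens"], record["feats_dict_json"])
    ]

# Usage with an excerpt of the instance shown above:
record = {
    "tokens": ["Rüzgâr", "yine"],
    "feats_dict_json": ["{\"Case\":\"Nom\",\"Number\":\"Sing\",\"Person\":\"3\"}", "{}"],
}
pairs = decode_feats(record)
```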

## Benchmarking

Benchmarking is done with the scripts in the accompanying [GitHub repo](https://github.com/turkish-nlp-suite/Treebank-Benchmarking). Please proceed to that repo to run the experiments.

Here are the benchmarking results for BERTurk with our scripts:

| Metric | BOUN | IMST |
|---|---:|---:|
| pos_acc | 0.9263 | 0.9377 |
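The `pos_acc` metric is token-level POS tagging accuracy: correctly predicted tags divided by total tags over the test set. A minimal sketch of how such a score can be computed (this is an illustration, not the repo's actual evaluation code):

```python
def pos_accuracy(gold_sents, pred_sents):
    """Token-level POS accuracy: fraction of tokens whose predicted
    tag matches the gold tag, aggregated over all sentences."""
    correct = total = 0
    for gold, pred in zip(gold_sents, pred_sents):
        for g, p in zip(gold, pred):
            correct += (g == p)
            total += 1
    return correct / total
```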