- split: validation
  path: IMST/dev.jsonl
---

<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/MediTurcaTextlogo.png" width="30%" height="30%">

# Turkish Treebank Benchmarking

This is the repo for Turkish treebank benchmarking, i.e., evaluating Transformer models on the joint POS tagging, dependency parsing, and morphological feature prediction (POS-DEP-MORPH) task.

For the data, we used two treebanks, [IMST]() and [BOUN](). We converted the CoNLL-U format to JSON Lines for compatibility with HF dataset formats.
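The conversion script itself is not part of this card, but the CoNLL-U to JSON Lines step can be sketched with the standard library alone. The field names below (`tokens`, `upos`, `heads`, `deprels`, `feats`) are illustrative assumptions, not necessarily the released schema:

```python
import json

def conllu_sentences(text):
    """Yield one jsonl-ready dict per sentence in a CoNLL-U string.

    Columns per the CoNLL-U spec: id, form, lemma, upos, xpos,
    feats, head, deprel, deps, misc.
    """
    sent = {"tokens": [], "upos": [], "heads": [], "deprels": [], "feats": []}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):  # sentence-level comments
            continue
        if not line:  # blank line terminates a sentence
            if sent["tokens"]:
                yield sent
                sent = {"tokens": [], "upos": [], "heads": [], "deprels": [], "feats": []}
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:  # skip multiword ranges / empty nodes
            continue
        sent["tokens"].append(cols[1])
        sent["upos"].append(cols[3])
        sent["feats"].append(cols[5])
        sent["heads"].append(int(cols[6]))
        sent["deprels"].append(cols[7])
    if sent["tokens"]:  # file may not end with a blank line
        yield sent

# Tiny made-up CoNLL-U fragment for illustration
sample = (
    "# text = Eve gittim.\n"
    "1\tEve\tev\tNOUN\t_\tCase=Dat|Number=Sing\t2\tobl\t_\t_\n"
    "2\tgittim\tgit\tVERB\t_\tTense=Past\t0\troot\t_\t_\n"
    "3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_\n"
    "\n"
)

for s in conllu_sentences(sample):
    print(json.dumps(s, ensure_ascii=False))
```

Writing one such JSON object per line yields a `.jsonl` file that HF tooling can consume directly.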

Here are treebank sizes at a glance:

| Dataset | train size | dev size | test size |
|---|---|---|---|

A typical instance from the dataset looks like:

```
```
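Since each split is plain JSON Lines (one sentence per line), the files can be written and read back with the standard library alone; `datasets.load_dataset("json", data_files=...)` consumes the same format. The file name and field names below are illustrative assumptions:

```python
import json
import os
import tempfile

# Illustrative records in the jsonl shape sketched above (assumed field names)
records = [
    {"tokens": ["Eve", "gittim", "."], "upos": ["NOUN", "VERB", "PUNCT"]},
    {"tokens": ["Merhaba", "!"], "upos": ["INTJ", "PUNCT"]},
]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "dev.jsonl")  # real files live at paths like IMST/dev.jsonl
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    # Read it back line by line -- the same files could also be loaded with
    # datasets.load_dataset("json", data_files={"validation": path})
    with open(path, encoding="utf-8") as f:
        loaded = [json.loads(line) for line in f]

print(len(loaded))
```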

## Benchmarking

Here are the benchmarking results for BERTurk:

### Test results (BOUN vs IMST)

| Metric | BOUN | IMST |
|---|---:|---:|
| pos_acc | 0.9263 | 0.9377 |
| uas | 0.8151 | 0.7680 |
| las | 0.7459 | 0.6960 |
| morph_Abbr_acc | 0.4657 | 0.6705 |
| morph_Aspect_acc | 0.1141 | 0.1152 |
| morph_Case_acc | 0.1196 | 0.0586 |
| morph_Echo_acc | 0.4261 | 0.4875 |
| morph_Evident_acc | 0.3072 | 0.3953 |
| morph_Mood_acc | 0.0654 | 0.0651 |
| morph_NumType_acc | 0.2694 | 0.2991 |
| morph_Number_acc | 0.3986 | 0.4782 |
| morph_Number[psor]_acc | 0.4348 | 0.2333 |
| morph_Person_acc | 0.4021 | 0.4726 |
| morph_Person[psor]_acc | 0.2490 | 0.0671 |
| morph_Polarity_acc | 0.3350 | 0.1674 |
| morph_PronType_acc | 0.1535 | 0.2680 |
| morph_Reflex_acc | 0.5620 | 0.7051 |
| morph_Tense_acc | 0.2149 | 0.1241 |
| morph_Typo_acc | 0.5081 | — |
| morph_VerbForm_acc | 0.4912 | 0.2364 |
| morph_Voice_acc | 0.0201 | 0.2602 |
| morph_Polite_acc | — | 0.1436 |
| morph_micro_acc | 0.3076 | 0.2915 |

Notes:

- `—` means that metric wasn't present in that dataset's reported results (e.g., `morph_Typo_acc` only in BOUN; `morph_Polite_acc` only in IMST).
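For reference, the headline metrics in the table are conventionally computed per token: POS accuracy compares predicted and gold tags, UAS counts tokens with the correct predicted head, and LAS additionally requires the correct dependency label. A minimal sketch of these conventions (not the repo's actual evaluation code):

```python
def pos_dep_scores(gold, pred):
    """Compute pos_acc, uas, and las over one sentence (or a flat token list).

    gold/pred: equal-length lists of (upos, head, deprel) tuples, one per token.
    """
    n = len(gold)
    pos_acc = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    # UAS: predicted head index matches gold head index
    uas = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n
    # LAS: head AND dependency label both match (so LAS <= UAS)
    las = sum(g[1] == p[1] and g[2] == p[2] for g, p in zip(gold, pred)) / n
    return {"pos_acc": pos_acc, "uas": uas, "las": las}

# Made-up three-token example: all tags right, last head wrong
gold = [("NOUN", 2, "obl"), ("VERB", 0, "root"), ("PUNCT", 2, "punct")]
pred = [("NOUN", 2, "obl"), ("VERB", 0, "root"), ("PUNCT", 1, "punct")]
print(pos_dep_scores(gold, pred))
```

The per-feature `morph_*_acc` columns follow the same pattern, scored per morphological feature key instead of per tag.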

## Acknowledgments

Like most of our projects, this research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks to the TRC team once again.