---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
config_names:
- BOUN
- IMST
dataset_info:
- config_name: BOUN
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    list:
      dtype: string
  - name: upos
    list:
      dtype: string
  - name: heads
    list:
      dtype: int32
  - name: rels
    list:
      dtype: string
  - name: feats
    list:
      dtype: string
  - name: feats_dict_json
    list:
      dtype: string
- config_name: IMST
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    list:
      dtype: string
  - name: upos
    list:
      dtype: string
  - name: heads
    list:
      dtype: int32
  - name: rels
    list:
      dtype: string
  - name: feats
    list:
      dtype: string
  - name: feats_dict_json
    list:
      dtype: string
  splits:
  - name: train
    num_bytes: 116892
    num_examples: 3435
  - name: validation
    num_bytes: 116892
    num_examples: 1100
  - name: test
    num_bytes: 116892
    num_examples: 1100
configs:
- config_name: BOUN
  data_files:
  - split: train
    path: BOUN/train.jsonl
  - split: test
    path: BOUN/test.jsonl
  - split: validation
    path: BOUN/dev.jsonl
- config_name: IMST
  data_files:
  - split: train
    path: IMST/train.jsonl
  - split: test
    path: IMST/test.jsonl
  - split: validation
    path: IMST/dev.jsonl
---

<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/TreeBench.png" width="30%" height="30%">

# Turkish Treebank Benchmarking
This is the repo for Turkish treebank benchmarking, namely evaluating Transformer models on the joint POS-dependency-morphology task.
For the data, we used two treebanks, [IMST](https://github.com/UniversalDependencies/UD_Turkish-IMST) and [BOUN](https://github.com/UniversalDependencies/UD_Turkish-BOUN), and converted the CoNLL-U files to JSON lines to be compatible with the HF datasets format.
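The CoNLL-U to JSON-lines mapping can be sketched as follows. This is a minimal illustration, not the conversion script we actually used; `conllu_to_record` is a hypothetical helper name, and it assumes plain token lines (multiword-token and empty-node lines are skipped):

```python
import json

def conllu_to_record(sent_id, text, conllu_lines):
    """Map one CoNLL-U sentence to a JSON-lines record with the fields
    used in this dataset (illustrative sketch only)."""
    record = {"id": sent_id, "text": text, "tokens": [], "upos": [],
              "heads": [], "rels": [], "feats": [], "feats_dict_json": []}
    for line in conllu_lines:
        cols = line.split("\t")
        # Skip multiword-token ("1-2") and empty-node ("1.1") lines.
        if not cols[0].isdigit():
            continue
        record["tokens"].append(cols[1])      # FORM
        record["upos"].append(cols[3])        # UPOS
        record["feats"].append(cols[5])       # FEATS as raw UD string
        record["heads"].append(int(cols[6]))  # HEAD (0 = root)
        record["rels"].append(cols[7])        # DEPREL
        # "_" (no features) becomes an empty JSON object.
        feats = {} if cols[5] == "_" else dict(
            kv.split("=", 1) for kv in cols[5].split("|"))
        record["feats_dict_json"].append(json.dumps(feats))
    return record
```

One record per sentence then becomes one line in the `*.jsonl` files.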

Here are treebank sizes at a glance:

| Dataset | train lines | dev lines | test lines |
|---|---|---|---|
| BOUN | 7803 | 979 | 979 |
| IMST | 3435 | 1100 | 1100 |

A typical instance from the dataset looks like:

```json
{
  "id": "ins_1267",
  "tokens": [
    "Rüzgâr",
    "yine",
    "güçlü",
    "esiyor",
    "du",
    "."
  ],
  "upos": [
    "NOUN",
    "ADV",
    "ADV",
    "VERB",
    "AUX",
    "PUNCT"
  ],
  "heads": [
    4,
    4,
    4,
    0,
    4,
    4
  ],
  "rels": [
    "nsubj",
    "advmod",
    "advmod",
    "root",
    "cop",
    "punct"
  ],
  "feats": [
    "Case=Nom|Number=Sing|Person=3",
    "_",
    "_",
    "Aspect=Imp|Polarity=Pos|VerbForm=Part",
    "Aspect=Perf|Evident=Fh|Number=Sing|Person=3|Tense=Past",
    "_"
  ],
  "text": "Rüzgâr yine güçlü esiyor du .",
  "feats_dict_json": [
    "{\"Case\":\"Nom\",\"Number\":\"Sing\",\"Person\":\"3\"}",
    "{}",
    "{}",
    "{\"Aspect\":\"Imp\",\"Polarity\":\"Pos\",\"VerbForm\":\"Part\"}",
    "{\"Aspect\":\"Perf\",\"Evident\":\"Fh\",\"Number\":\"Sing\",\"Person\":\"3\",\"Tense\":\"Past\"}",
    "{}"
  ]
}
```
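Note that `feats` keeps the raw UD feature strings, while `feats_dict_json` stores each token's features as a JSON-encoded dict (with `_` mapped to `{}`). Turning them back into Python dicts is one `json.loads` per token, as in this small sketch using values from the instance above:

```python
import json

# Per-token feature strings, as stored in "feats_dict_json".
feats_dict_json = [
    "{\"Case\":\"Nom\",\"Number\":\"Sing\",\"Person\":\"3\"}",
    "{}",
    "{\"Aspect\":\"Imp\",\"Polarity\":\"Pos\",\"VerbForm\":\"Part\"}",
]
# Decode each JSON string into a dict; featureless tokens give {}.
feats = [json.loads(s) for s in feats_dict_json]
# feats[0]["Case"] == "Nom"; feats[1] == {}
```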

## Benchmarking
Benchmarking is done with the scripts in the accompanying [GitHub repo](https://github.com/turkish-nlp-suite/Treebank-Benchmarking). Please head over to that repo to run the experiments.
Here are the benchmarking results for BERTurk with our scripts:

| Metric | BOUN | IMST |
|---|---:|---:|
| pos_acc | 0.9263 | 0.9377 |
| uas | 0.8151 | 0.7680 |
| las | 0.7459 | 0.6960 |
| morph_Abbr_acc | 0.4657 | 0.6705 |
| morph_Aspect_acc | 0.1141 | 0.1152 |
| morph_Case_acc | 0.1196 | 0.0586 |
| morph_Echo_acc | 0.4261 | 0.4875 |
| morph_Evident_acc | 0.3072 | 0.3953 |
| morph_Mood_acc | 0.0654 | 0.0651 |
| morph_NumType_acc | 0.2694 | 0.2991 |
| morph_Number_acc | 0.3986 | 0.4782 |
| morph_Number[psor]_acc | 0.4348 | 0.2333 |
| morph_Person_acc | 0.4021 | 0.4726 |
| morph_Person[psor]_acc | 0.2490 | 0.0671 |
| morph_Polarity_acc | 0.3350 | 0.1674 |
| morph_PronType_acc | 0.1535 | 0.2680 |
| morph_Reflex_acc | 0.5620 | 0.7051 |
| morph_Tense_acc | 0.2149 | 0.1241 |
| morph_Typo_acc | 0.5081 | — |
| morph_VerbForm_acc | 0.4912 | 0.2364 |
| morph_Voice_acc | 0.0201 | 0.2602 |
| morph_Polite_acc | — | 0.1436 |
| morph_micro_acc | 0.3076 | 0.2915 |

Notes:
- `—` means that metric wasn't present in that dataset's reported results (e.g., `morph_Typo_acc` only in BOUN; `morph_Polite_acc` only in IMST).

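For reference, the `uas` and `las` rows follow the standard parsing definitions: UAS is the fraction of tokens whose predicted head is correct, and LAS additionally requires the correct dependency label. A minimal sketch (illustrative only, not the scoring code from the repo; `uas_las` is a hypothetical helper):

```python
def uas_las(gold_heads, gold_rels, pred_heads, pred_rels):
    """Compute unlabeled (UAS) and labeled (LAS) attachment scores
    over one sentence's parallel head/relation lists."""
    n = len(gold_heads)
    # UAS: predicted head index matches gold head index.
    correct_head = sum(g == p for g, p in zip(gold_heads, pred_heads))
    # LAS: both the head and the dependency label match.
    correct_labeled = sum(
        gh == ph and gr == pr
        for gh, gr, ph, pr in zip(gold_heads, gold_rels, pred_heads, pred_rels)
    )
    return correct_head / n, correct_labeled / n
```

The per-feature `morph_*_acc` rows are accuracies computed analogously over tokens carrying that feature.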
## Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC), like most of our projects. Many thanks to the TRC team once again.