---
dataset_info:
  features:
  - name: tokens
    list: string
  - name: lemmas
    list: string
  - name: pos_tags
    list: string
  splits:
  - name: train
    num_bytes: 2793924
    num_examples: 7803
  - name: validation
    num_bytes: 339997
    num_examples: 979
  - name: test
    num_bytes: 336684
    num_examples: 979
  download_size: 1109706
  dataset_size: 3470605
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

# Dataset Card for POS UD-BOUN

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Source Data](#source-data)

## Dataset Description

UD_Turkish-BOUN was originally [released](https://github.com/UniversalDependencies/UD_Turkish-BOUN/tree/master) by TABILAB.

### Dataset Structure

We kept the original data structure.

### Data Fields

- **tokens** (list): a `list` of `string` features, one entry per token.
- **lemmas** (list): a `list` of `string` features giving the lemma of each token.
- **pos_tags** (list): a `list` of part-of-speech (POS) tags, where each tag is a string such as `"NUM"`, `"_"`, `"NOUN"`, `"AUX"`, `"PUNCT"`, etc.

## Source Data

[github.com/UniversalDependencies/UD_Turkish-BOUN](https://github.com/UniversalDependencies/UD_Turkish-BOUN/tree/master)
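The three fields above are token-aligned: each example carries parallel `tokens`, `lemmas`, and `pos_tags` lists of equal length. A minimal sketch of a record in this layout (the Turkish sentence below is a hypothetical illustration, not an actual example from the dataset):

```python
# Hypothetical record matching the schema above: three parallel lists,
# one entry per token of the sentence.
example = {
    "tokens": ["Bu", "bir", "örnek", "."],
    "lemmas": ["bu", "bir", "örnek", "."],
    "pos_tags": ["DET", "DET", "NOUN", "PUNCT"],
}

# Because the fields are token-aligned, all three lists have the same length.
assert len(example["tokens"]) == len(example["lemmas"]) == len(example["pos_tags"])

# Pair each token with its tag, e.g. for training a POS tagger.
pairs = list(zip(example["tokens"], example["pos_tags"]))
print(pairs)
```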