# Tatar Morphology Benchmark
This repository contains evaluation results for morphological analysis models trained on the Tatar Morphological Corpus.
## Models Evaluated
- mBERT
- RuBERT
- DistilBERT
- LSTM
- Turkish BERT
- XLM-R
## Key Results (Test Set Accuracy)
| Model | Accuracy | F1 (micro) |
|---|---|---|
| mBERT | 0.9905 | 0.9905 |
| RuBERT | 0.9861 | 0.9861 |
| DistilBERT | 0.9850 | 0.9850 |
| XLM-R | 0.9837 | 0.9837 |
| LSTM | 0.9440 | 0.9440 |
| Turkish BERT | 0.8769 | 0.8769 |
All results are computed on a held-out test set of 7,999 sentences (with roughly 80k training sentences). Detailed metrics with 95% confidence intervals are available in `final_results_with_ci.csv`.
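For intuition about how wide a 95% confidence interval is at this test-set size, here is a normal-approximation (Wald) interval for a proportion. This is only a sketch; the intervals in `final_results_with_ci.csv` may have been computed by a different method (e.g. bootstrap), so the numbers below need not match the file exactly.

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion p observed on n items."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# mBERT's test accuracy on 7,999 sentences.
lo, hi = wald_ci(0.9905, 7999)
print(f"mBERT accuracy 95% CI: [{lo:.4f}, {hi:.4f}]")
```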
## Contents
- `final_results_with_ci.csv` – main metrics with confidence intervals
- `pos_accuracy.csv` – accuracy per part of speech
- `model_comparison.png` – bar chart comparison
- `training_curves_combined.png` – loss/accuracy curves
- `confusion_matrices.png` – confusion matrices for top models
- `accuracy_by_length.csv` – accuracy grouped by sentence length
- `tag_frequencies.csv` – tag frequency statistics
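The CSV files above can be inspected with the standard library alone. The snippet below is a sketch with hypothetical column names (`model`, `accuracy`); the real headers in `final_results_with_ci.csv` may differ, so check the file before adapting it:

```python
import csv
import io

# Inline sample standing in for final_results_with_ci.csv;
# column names here are illustrative, not confirmed.
sample = """model,accuracy,f1_micro
mBERT,0.9905,0.9905
RuBERT,0.9861,0.9861
"""

rows = list(csv.DictReader(io.StringIO(sample)))
best = max(rows, key=lambda r: float(r["accuracy"]))
print(best["model"])  # mBERT
```

For a real run, replace the `io.StringIO(sample)` wrapper with `open("final_results_with_ci.csv", newline="")`.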
## Citation
If you use these results, please cite:
    @misc{tatar-morph-benchmark,
      author = {Arabov Mullosharaf Kurbonovich, TatarNLPWorld},
      title = {Tatar Morphology Benchmark},
      year = {2026},
      howpublished = {Hugging Face Dataset},
      url = {https://huggingface.co/datasets/TatarNLPWorld/tatar-morphology-benchmark}
    }