# Tamil Morphological Generalization Benchmark (TAMIL-MORPH)

The first morphological generalization benchmark for Tamil: 1,030 test cases across 9 categories, designed to evaluate whether LLMs truly understand Tamil morphological rules or merely memorize surface forms.

Paper: *"A Thousand Language Problem: Morphological Understanding in Linguistic AI"*
## Benchmark Overview

| Category | Test Cases | Description |
|---|---|---|
| Case Suffixes (வேற்றுமை) | 240 | 6 grammatical cases across 40 noun roots |
| Plural + Case (பன்மை) | ~160 | Plural formation with case markers |
| Verb Conjugation (வினைத்திரிபு) | ~210 | 7 person-tense combinations across verb roots |
| Sandhi (புணர்ச்சி) | ~50 | Sound changes at word boundaries |
| Honorific Forms (மரியாதை) | ~90 | Informal/formal/high-respect registers |
| Negation (எதிர்மறை) | ~90 | Present/past/future negative forms |
| Compound Words (கூட்டுச்சொல்) | ~50 | Word joining rules |
| Conditional/Causal (நிபந்தனை) | ~60 | Conditional and causal suffixes |
| Novel Combinations (புதிய வடிவங்கள்) | ~80 | Multi-suffix forms never seen in training |
| **Total** | **1,030** | |
## Baseline Results
| Model | Overall Accuracy |
|---|---|
| GPT-4o-mini | 54.0% |
## Files

- `Benchmarkdata.md` -- Full benchmark data (JSON arrays in Markdown)
- `morph_benchmark_eval.py` -- Complete evaluation script (supports local HF models, OpenAI, and Google Gemini backends)
- `baselines/gpt-4o-mini_results.json` -- Detailed per-test results for GPT-4o-mini
- `kaggle_benchmark.ipynb` -- Ready-to-run Kaggle notebook for benchmarking
- `runpod_benchmark.py` -- RunPod GPU benchmarking script
## Usage

### Run evaluation locally

```bash
# With the OpenAI API
python morph_benchmark_eval.py --model gpt-4o-mini --backend openai

# With Google Gemini (free tier)
python morph_benchmark_eval.py --model gemini-2.0-flash --backend gemini

# With a local HuggingFace model
python morph_benchmark_eval.py --model Tamil-ai/tamil-qwen25-7b-instruct --backend local

# Run all configured models
python morph_benchmark_eval.py --all
```
### Load benchmark data programmatically

```python
import json
import re

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Tamil-ai/tamil-morphological-benchmark",
    filename="Benchmarkdata.md",
    repo_type="dataset",
)

# Parse the JSON blocks out of the Markdown file
# (see morph_benchmark_eval.py for the full parser)
```
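Since `Benchmarkdata.md` stores its categories as JSON arrays inside a Markdown file, a small regex over fenced `json` blocks is enough to recover them. This is only a sketch -- it assumes the data sits in standard triple-backtick `json` fences, and `morph_benchmark_eval.py` remains the canonical parser:

```python
import json
import re

def extract_json_blocks(markdown_text):
    """Parse every fenced JSON block found in a Markdown string."""
    fence = "`" * 3  # keeps a literal triple-backtick out of this example
    pattern = re.compile(fence + r"json\s*(.*?)" + fence, re.DOTALL)
    return [json.loads(m.group(1)) for m in pattern.finditer(markdown_text)]

# Demo on an inline snippet standing in for Benchmarkdata.md:
sample = (
    "Some prose.\n"
    + "`" * 3 + "json\n"
    + '[{"root": "வீடு", "root_meaning": "house"}]\n'
    + "`" * 3
)
blocks = extract_json_blocks(sample)
# blocks[0][0]["root"] == "வீடு"
```

In the real file you would read `Path(path).read_text(encoding="utf-8")` from the downloaded `path` and feed that string to `extract_json_blocks`.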
## Data Format

Each category contains structured JSON with roots, meanings, and expected morphological forms:

```json
{
  "root": "வீடு",
  "root_meaning": "house",
  "forms": {
    "accusative": {"tamil": "வீட்டை", "meaning": "the house (object)"},
    "dative": {"tamil": "வீட்டுக்கு", "meaning": "to the house"},
    "locative": {"tamil": "வீட்டில்", "meaning": "in the house"}
  }
}
```
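A quick structural check over a parsed record can catch malformed entries before evaluation. The field names below come from the example above; the validator itself is a sketch, not part of the released tooling:

```python
def validate_record(record):
    """Check that a record has a root, a gloss, and well-formed forms,
    where each form is a dict carrying "tamil" and "meaning" keys."""
    if not record.get("root") or not record.get("root_meaning"):
        return False
    forms = record.get("forms", {})
    return all(
        isinstance(form, dict) and form.get("tamil") and form.get("meaning")
        for form in forms.values()
    )

record = {
    "root": "வீடு",
    "root_meaning": "house",
    "forms": {
        "accusative": {"tamil": "வீட்டை", "meaning": "the house (object)"},
    },
}
# validate_record(record) -> True
```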
## Scoring

- **1.0** -- Exact match (after Tamil text normalization)
- **0.5** -- Partial match (the prediction is a substring of the expected form)
- **0.0** -- Wrong
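The three-level scheme can be sketched as follows. Unicode NFC normalization plus whitespace stripping is an assumption about what "Tamil text normalization" involves here; `morph_benchmark_eval.py` is authoritative:

```python
import unicodedata

def normalize(text):
    """Assumed normalization: Unicode NFC plus whitespace stripping."""
    return unicodedata.normalize("NFC", text.strip())

def score(predicted, expected):
    """Return 1.0 for an exact match, 0.5 when the prediction is a
    substring of the expected form, and 0.0 otherwise."""
    pred, exp = normalize(predicted), normalize(expected)
    if pred == exp:
        return 1.0
    if pred and pred in exp:
        return 0.5
    return 0.0

# score("வீட்டை", "வீட்டை") -> 1.0 (exact)
# score("வீட்", "வீட்டை")   -> 0.5 (substring)
```

NFC normalization matters for Tamil because visually identical strings can differ in how combining vowel signs are encoded.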
## Why This Benchmark?
Existing Tamil NLP benchmarks test translation or classification. None test whether models understand the generative morphological rules of Tamil -- an agglutinative language where a single root can produce hundreds of valid surface forms through suffix combinations.
This benchmark is transferable to other agglutinative languages (Turkish, Finnish, Hungarian, Korean, etc.) by replacing the morphological rules.
## Validation
All 1,030 test cases were validated using:
- Finite State Transducer (FST) analysis
- Stanza NLP morphological parser
- Manual rule verification
## Citation

```bibtex
@misc{tamilmorph2026,
  title={A Thousand Language Problem: Morphological Understanding in Linguistic AI},
  author={Tamil-AI},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/Tamil-ai/tamil-morphological-benchmark}
}
```