GomParam-v1 — First Dedicated Konkani Language Benchmark
GomParam (named after Gomantak, the ancient Sanskrit name for Goa) is the first
comprehensive benchmark designed specifically to evaluate large language models on
Konkani (ISO 639-3: kok) — a severely low-resource Indo-Aryan language spoken
by approximately 2.5 million speakers, primarily in Goa, India.
📄 Companion model: Gonyai-TEO2, a 251M-parameter Konkani LLM pretrained from scratch.
📦 Companion corpus: Konkani-Books-Corpus-v2, an 86M-token Konkani dataset.
Motivation
Existing Indic language benchmarks (IndicParam, MILU, IndicGenBench) contain minimal or no Konkani coverage, and those that do test world knowledge about Konkani culture rather than Konkani language ability. GomParam-v1 fills this gap by testing:
- Morphological correctness (verb conjugation, agreement)
- Syntactic competence (case marking, postpositions, participles)
- Reading comprehension in Konkani
- Cultural and pragmatic understanding (proverbs, jokes)
- Dialect robustness (Goan vs. Mangalorean Konkani)
No world knowledge is required. Every question is answerable from language understanding alone, making GomParam-v1 a fair test for any model regardless of its encyclopedic pretraining.
Dataset Structure
Modules
| Module | Items | Task | Scoring |
|---|---|---|---|
| cloze | 25 | Fill-in-the-blank (4-choice) | Log-likelihood MCQ |
| morphology | 20 | Verb conjugation (4-choice) | Log-likelihood MCQ |
| para_qa | 12 | Paragraph comprehension (4-choice) | Log-likelihood MCQ |
| jokes_sayings | 16 | Proverb/joke meaning (4-choice) | Log-likelihood MCQ |
| dialect | 15 | Goan vs. Mangalorean sentence pairs | Perplexity consistency |
| perplexity | 30 | Held-out sentences | Bits-per-token |
| **Total** | **118** | | |
Random baseline: 25.0% for all MCQ tasks (4-choice).
Cloze Item Format
```json
{
  "id": "cloze_001",
  "sentence": "तो उद्यां मुंबयीक ___ वता.",
  "candidates": ["विमानान", "विमाना", "विमानाक", "विमानानी"],
  "correct": 0,
  "category": "case_marking",
  "note": "instrumental case — travel by plane"
}
```
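Each cloze item expands into four complete candidate sentences by substituting the blank; a minimal sketch using the sample item above (the variable names are illustrative, not part of the dataset API):

```python
import json

# The sample cloze item shown above, parsed from its JSON form.
item = json.loads("""
{
  "id": "cloze_001",
  "sentence": "तो उद्यां मुंबयीक ___ वता.",
  "candidates": ["विमानान", "विमाना", "विमानाक", "विमानानी"],
  "correct": 0
}
""")

# Fill the blank with each candidate to obtain four full sentences.
prefix, suffix = item["sentence"].split("___")
options = [prefix + c + suffix for c in item["candidates"]]
gold = options[item["correct"]]
```

A log-likelihood MCQ evaluator then scores each of the four sentences and picks the highest-scoring one.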
Morphology Item Format
```json
{
  "id": "morph_001",
  "context": "हावें काल एक पुस्तक",
  "candidates": ["वाचलें", "वाचलो", "वाचली", "वाचतां"],
  "correct": 0,
  "category": "ergative_past",
  "note": "1sg ergative + neuter object past"
}
```
Para QA Item Format
```json
{
  "id": "para_001",
  "passage": "गोंय हें भारताच्या पश्चिम दर्यादेगेर...",
  "question": "गोंय भारताक केन्ना मेळ्ळें?",
  "candidates": ["१९४७ वर्सा", "१९६१ वर्सा", "१९५० वर्सा", "१९७१ वर्सा"],
  "correct": 1,
  "category": "factual_extraction"
}
```
Dialect Item Format
```json
{
  "id": "dialect_004",
  "goan_dev": "आमी उद्यां येतलो.",
  "mang_dev": "आमी फाल्यां येतलो.",
  "gloss": "We will come tomorrow.",
  "lexical_diff": true
}
```
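The dialect module is scored by perplexity consistency. The exact criterion is not spelled out in this card; one plausible reading, sketched here purely as an assumption, is that a model counts as dialect-robust on a pair when it assigns similar per-token likelihood to both variants of the same sentence:

```python
def mean_logprob(token_logprobs):
    # Average per-token natural-log probability of a sentence under the model.
    return sum(token_logprobs) / len(token_logprobs)

def dialect_consistent(goan_logprobs, mang_logprobs, tolerance=0.5):
    # A pair counts as consistent when the model scores the Goan and
    # Mangalorean variants about equally well; the 0.5-nat tolerance
    # is an illustrative choice, not the paper's threshold.
    diff = abs(mean_logprob(goan_logprobs) - mean_logprob(mang_logprobs))
    return diff <= tolerance
```

The per-token log-probs for each variant can be gathered the same way as in the log-likelihood MCQ code below.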
Usage
```python
from datasets import load_dataset

# Load individual modules
cloze = load_dataset("omdeep22/GomParam-v1", "cloze", split="test")
morph = load_dataset("omdeep22/GomParam-v1", "morphology", split="test")
para = load_dataset("omdeep22/GomParam-v1", "para_qa", split="test")
jokes = load_dataset("omdeep22/GomParam-v1", "jokes_sayings", split="test")
dialect = load_dataset("omdeep22/GomParam-v1", "dialect", split="test")
ppl_sents = load_dataset("omdeep22/GomParam-v1", "perplexity", split="test")
```
Evaluation (log-likelihood MCQ)
```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "omdeep22/Gonyai-teo2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

@torch.no_grad()
def score_completion(prompt, completion):
    """Mean log-likelihood of `completion` given `prompt`."""
    full = prompt + " " + completion
    p_ids = tokenizer.encode(prompt, return_tensors="pt")
    f_ids = tokenizer.encode(full, return_tensors="pt")
    if f_ids.shape[1] <= p_ids.shape[1]:
        return float("-inf")
    logits = model(f_ids).logits
    opt_start = p_ids.shape[1]
    # Logits at position t predict token t+1, hence the one-step shift.
    opt_logits = logits[0, opt_start - 1:-1, :]
    opt_targets = f_ids[0, opt_start:]
    lp = torch.nn.functional.log_softmax(opt_logits, dim=-1)
    return lp[torch.arange(len(opt_targets)), opt_targets].mean().item()

# Evaluate cloze: fill the blank with each candidate, keeping the text
# after the blank as part of the scored continuation.
correct = 0
for item in cloze:
    prefix, suffix = item["sentence"].split("___")
    scores = [score_completion(prefix.strip(), (c + suffix).strip())
              for c in item["candidates"]]
    if np.argmax(scores) == item["correct"]:
        correct += 1
print(f"Cloze accuracy: {correct / len(cloze) * 100:.2f}%")
```
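For the perplexity module, the bits-per-token metric converts mean negative log-likelihood from nats to bits. A small helper sketch follows; the token-weighted corpus aggregation is an assumption about the protocol, and the per-token log-probs can be gathered the same way as in `score_completion` above:

```python
import math

def bits_per_token(token_logprobs):
    # token_logprobs: natural-log probability the model assigns to each
    # token of a sentence. Negate, average, and convert nats -> bits.
    return -sum(token_logprobs) / (len(token_logprobs) * math.log(2))

def corpus_bits_per_token(per_sentence_logprobs):
    # Token-weighted average over all held-out sentences, so longer
    # sentences contribute proportionally more tokens.
    all_lps = [lp for sent in per_sentence_logprobs for lp in sent]
    return bits_per_token(all_lps)
```

As a sanity check, a model that assigns probability 0.5 to every token scores exactly 1.0 bit per token.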
Benchmark Results (GomParam-v1)
Results are from the original paper evaluation. All models were evaluated with zero-shot log-likelihood MCQ scoring. Higher is better for all columns except PPL (lower is better).
| Model | Params | Training | PPL↓ | Cloze | Morph | Para QA | Joke/Say | Dialect | Composite |
|---|---|---|---|---|---|---|---|---|---|
| Random Baseline | — | — | — | 25.0% | 25.0% | 25.0% | 25.0% | — | 25.0% |
| Qwen2.5-0.5B | 0.5B | Multilingual | — | 40.0% | 41.7% | 83.3% | 12.5% | 79.0% | 53.8% |
| Gemma-2-2B | 2B | Multilingual | — | 33.3% | 41.7% | 100% | 37.5% | 68.1% | 53.7% |
| Sarvam-1 | 2B | Indic incl. Konkani | — | 20.0% | 25.0% | 100% | 12.5% | 75.2% | 40.9% |
| Gonyai-TEO2 | 251M | Konkani only | — | 40.0% | 75.0% | 83.3% | 37.5% | 75.7% | 🏆 64.2% |
Key finding: Gonyai-TEO2 (251M parameters, Konkani-only pretraining) achieves the highest composite score despite being 8× smaller than Sarvam-1 and Gemma-2-2B. Morphology accuracy (75%) demonstrates that dedicated monolingual pretraining confers strong grammatical competence that multilingual models cannot match at equivalent scale. Multilingual models retain an advantage on Para QA tasks where passage-level reading comprehension partially substitutes for language depth.
Linguistic Coverage
Script: Devanagari (primary Goan Konkani script)
Grammatical phenomena tested in Cloze & Morphology:
- Ergative-absolutive alignment (transitive past tense)
- Gender agreement (masculine / feminine / neuter)
- Number agreement (singular / plural)
- Tense-aspect (present, past, future, imperfective, pluperfect)
- Causative constructions (direct and indirect)
- Case marking (nominative, accusative, instrumental, genitive, locative)
- Postpositions and adverbial particles
- Conjunctive and temporal participles
- Relative clause pronoun resolution
- Negation scope
Dialect pairs cover:
- Lexical variation (पाणी vs उदक, शाळा vs इस्कोल, पयसे vs दुडू)
- Phonological variation (माका vs म्हाका, हावें vs हांवें)
- Dialectal synonyms for temporal adverbs (उद्यां vs फाल्यां)
Construction Methodology
All benchmark items were hand-crafted by a native Goan Konkani speaker with reference to:
- A Grammar of Konkani (Sardessai, 1986)
- Goa Konkani Akademi linguistic reference materials
- Native speaker intuition for naturalness verification
Items were designed following these principles:
- Language-only answerability — no item requires world knowledge
- Distractor plausibility — wrong options are grammatically related forms
- Register diversity — colloquial, narrative, descriptive, prescriptive
- Domain diversity — family, nature, education, culture, emotion, agriculture
Citation
If you use GomParam-v1 in your research, please cite:
```bibtex
@misc{borkar2026gomparam,
  title        = {GomParam-v1: A Benchmark for Evaluating Language Understanding in Konkani},
  author       = {Borkar, Omdeep},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/omdeep22/GomParam-v1}},
  note         = {First dedicated Konkani language benchmark. Companion to Gonyai-TEO2.}
}
```
Related Resources
| Resource | Link |
|---|---|
| Gonyai-TEO2 (companion model) | omdeep22/Gonyai-teo2 |
| Konkani-Books-Corpus-v2 | omdeep22/Konkani-Books-Corpus-v2 |
| Benchmark code (Kaggle) | GomParam evaluation notebook |
License
CC BY 4.0 — Free to use with attribution.