vjdevane committed · verified
Commit 1a3d213 · 1 Parent(s): 4a7d2c4

Added the IndicParam Dataset

Files changed (2):
  1. README.md +185 -0
  2. data.parquet +3 -0
README.md ADDED
@@ -0,0 +1,185 @@
## Dataset Card for IndicParam

### Dataset Summary

IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of **low- and extremely low-resource Indic languages**.
The dataset contains **13,207 multiple-choice questions (MCQs)** across **11 Indic languages**, plus a separate **Sanskrit–English code-mixed** set, all sourced from official UGC-NET language question papers and answer keys.

### Supported Tasks

- **`multiple-choice-qa`**: Evaluate LLMs on graduate-level multiple-choice question answering across low-resource Indic languages.
- **`language-understanding-evaluation`**: Assess language-specific competence (morphology, syntax, semantics, discourse) using explicitly labeled questions.
- **`general-knowledge-evaluation`**: Measure factual and domain knowledge in literature, culture, history, and related disciplines.
- **`question-type-evaluation`**: Analyze performance across MCQ formats (Normal MCQ, Assertion–Reason, List Matching, etc.).

### Languages

IndicParam covers the following languages and one code-mixed variant:

- **Low-resource (4)**: Nepali, Gujarati, Marathi, Odia
- **Extremely low-resource (7)**: Dogri, Maithili, Rajasthani, Sanskrit, Bodo, Santali, Konkani
- **Code-mixed**: Sanskrit–English (Sans-Eng)

Scripts:

- **Devanagari**: Nepali, Marathi, Maithili, Konkani, Bodo, Dogri, Rajasthani, Sanskrit
- **Gujarati**: Gujarati
- **Odia (Orya)**: Odia
- **Ol Chiki (Olck)**: Santali

All questions are presented in the **native script** of the target language (or in code-mixed form for Sans-Eng).

---

## Dataset Structure

### Data Instances

Each instance is a single MCQ from a UGC-NET language paper. An example (Maithili):

```json
{
  "unique_question_id": "782166eef1efd963b5db0e8aa42b9a6e",
  "subject": "Maithili",
  "exam_name": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "paper_number": "Question Papers of NET Dec. 2012 Maithili Paper III hindi",
  "question_number": 1,
  "question_text": "मिथिलाभाषा रामायण' में सीताराम-विवाहक वर्णन भेल अछि -",
  "option_a": "बालकाण्डमें",
  "option_b": "अयोध्याकाण्डमे",
  "option_c": "सुन्दरकाण्डमे",
  "option_d": "उत्तरकाण्डमे",
  "correct_answer": "a",
  "question_type": "Normal MCQ"
}
```
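A record like the one above can be rendered as a four-option prompt for evaluation. A minimal sketch, assuming plain `dict` records; the `render_mcq` helper and its exact prompt layout are our own illustration, not something prescribed by the dataset:

```python
def render_mcq(rec: dict) -> str:
    """Render one IndicParam record as an A-D multiple-choice prompt.

    Hypothetical helper: the field names follow the "Data Fields" schema,
    but the prompt format itself is an illustrative choice.
    """
    options = "\n".join(
        f"{label}. {rec[f'option_{label.lower()}']}" for label in "ABCD"
    )
    return f"{rec['question_text']}\n{options}\nAnswer (A/B/C/D):"

# The Maithili example above, reduced to the fields the prompt needs:
example = {
    "question_text": "मिथिलाभाषा रामायण' में सीताराम-विवाहक वर्णन भेल अछि -",
    "option_a": "बालकाण्डमें",
    "option_b": "अयोध्याकाण्डमे",
    "option_c": "सुन्दरकाण्डमे",
    "option_d": "उत्तरकाण्डमे",
}
print(render_mcq(example))
```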

Questions span:

- **Language Understanding (LU)**: linguistics and grammar (phonology, morphology, syntax, semantics, discourse).
- **General Knowledge (GK)**: literature, authors, works, cultural concepts, history, and related factual content.

### Data Fields

- **`unique_question_id`** *(string)*: Unique identifier for each question.
- **`subject`** *(string)*: Name of the language / subject (e.g., `Nepali`, `Maithili`, `Sanskrit`).
- **`exam_name`** *(string)*: Full exam name (UGC-NET session and subject).
- **`paper_number`** *(string)*: Paper identifier as given by UGC-NET.
- **`question_number`** *(int)*: Question index within the original paper.
- **`question_text`** *(string)*: Question text in the target language (or Sanskrit–English code-mixed).
- **`option_a`**, **`option_b`**, **`option_c`**, **`option_d`** *(string)*: Four answer options.
- **`correct_answer`** *(string)*: Correct option label (`a`, `b`, `c`, or `d`).
- **`question_type`** *(string)*: Question format, one of:
  - `Normal MCQ`
  - `Assertion and Reason`
  - `List Matching`
  - `Fill in the blanks`
  - `Identify incorrect statement`
  - `Ordering`

### Data Splits

IndicParam is provided as a **single evaluation split**:

| Split | Number of Questions |
| ----- | ------------------- |
| test  | 13,207              |

All rows are intended for **evaluation only** (no dedicated training/validation splits).

---

## Language Distribution

The benchmark follows the distribution reported in the IndicParam paper:

| Language   | #Questions | Script       | Code |
| ---------- | ---------- | ------------ | ---- |
| Nepali     | 1,038      | Devanagari   | npi  |
| Marathi    | 1,245      | Devanagari   | mar  |
| Gujarati   | 1,044      | Gujarati     | guj  |
| Odia       | 577        | Orya         | ory  |
| Maithili   | 1,286      | Devanagari   | mai  |
| Konkani    | 1,328      | Devanagari   | gom  |
| Santali    | 873        | Olck         | sat  |
| Bodo       | 1,313      | Devanagari   | brx  |
| Dogri      | 1,027      | Devanagari   | doi  |
| Rajasthani | 1,190      | Devanagari   | –    |
| Sanskrit   | 1,315      | Devanagari   | san  |
| Sans-Eng   | 971        | (code-mixed) | –    |
| **Total**  | **13,207** |              |      |

Each language’s questions are drawn from its respective UGC-NET language papers.

---

## Dataset Creation

### Source and Collection

- **Source**: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
- **Scope**: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
- **Extraction**:
  - Machine-readable PDFs are parsed directly.
  - Non-selectable PDFs are processed using OCR.
  - All text is normalized while preserving the original script and content.

### Annotation

In addition to the raw MCQs, each question is annotated with its **question type** (described in detail in the paper): Normal MCQ, Assertion–Reason, List Matching, Fill in the blanks, Identify incorrect statement, or Ordering.

These annotations support fine-grained analysis of model behavior across **knowledge vs. language ability** and **question format**.

---

## Considerations for Using the Data

### Social Impact

IndicParam is designed to:

- Enable rigorous evaluation of LLMs on **under-represented Indic languages** with substantial speaker populations but very limited web presence.
- Encourage **culturally grounded** AI systems that perform robustly on Indic scripts and linguistic phenomena.
- Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.

Users should be aware that the content is drawn from **academic examinations** and may over-represent formal, exam-style language relative to everyday usage.

### Evaluation Guidelines

To align with the paper and allow consistent comparison:

1. **Task**: Treat each instance as a multiple-choice QA item with four options.
2. **Input format**: Present `question_text` plus the four options (`A–D`) to the model.
3. **Required output**: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
4. **Decoding**: Use **greedy decoding / temperature = 0 / `do_sample = False`** to ensure deterministic outputs.
5. **Metric**: Compute **accuracy** based on exact match between predicted option and `correct_answer` (case-insensitive after mapping to A–D).
6. **Analysis**:
   - Report **overall accuracy**.
   - Break down results **per language**.
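The scoring in steps 5 and 6 can be sketched as follows. The `records` list and the always-`"A"` `predict` stub are hypothetical placeholders for the real dataset rows and a real model call with greedy decoding:

```python
from collections import defaultdict

# Hypothetical mini-batch standing in for rows of data.parquet.
records = [
    {"subject": "Maithili", "correct_answer": "a"},
    {"subject": "Maithili", "correct_answer": "c"},
    {"subject": "Nepali", "correct_answer": "a"},
]

def predict(rec: dict) -> str:
    """Placeholder model that always answers 'A'. Swap in a real LLM call
    using greedy decoding (temperature = 0 / do_sample=False)."""
    return "A"

correct = defaultdict(int)
total = defaultdict(int)
for rec in records:
    pred = predict(rec).strip().upper()           # model output mapped to A-D
    gold = rec["correct_answer"].strip().upper()  # case-insensitive comparison
    total[rec["subject"]] += 1
    correct[rec["subject"]] += int(pred == gold)

overall = sum(correct.values()) / sum(total.values())
per_language = {lang: correct[lang] / total[lang] for lang in total}
```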

---

## Additional Information

### Citation Information

If you use IndicParam in your research, please cite:

```bibtex
```

For related Hindi-only evaluation and question-type taxonomy, please also see and cite [ParamBench](https://huggingface.co/datasets/bharatgenai/ParamBench).

### License

IndicParam is released for **non-commercial research and evaluation**.

### Acknowledgments

IndicParam was curated and annotated by the authors and native-speaker annotators as described in the paper.
We acknowledge UGC-NET/NTA for making examination materials publicly accessible, and the broader Indic NLP community for foundational tools and resources.
data.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:891891ece996e6f1cb43aa84cefd8e5f7ffa8aab515961b3f0413cb7e8b468a6
size 2746527