Improve dataset card: Add metadata, paper/code links, and sample usage

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +82 -24
README.md CHANGED
@@ -1,15 +1,36 @@
  ---
  configs:
- - config_name: IndicParam
-   data_files:
-   - path: data*
-     split: test
  tags:
  - benchmark
  ---

  ## Dataset Card for IndicParam

  ### Dataset Summary

  IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of **low- and extremely low-resource Indic languages**.
@@ -127,34 +148,63 @@ Each language’s questions are drawn from its respective UGC-NET language paper

  ### Source and Collection

- - **Source**: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
- - **Scope**: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
- - **Extraction**:
-   - Machine-readable PDFs are parsed directly.
-   - Non-selectable PDFs are processed using OCR.
-   - All text is normalized while preserving the original script and content.


  ### Annotation

  In addition to the raw MCQs, each question is annotated by question type (described in detail in the paper):

- - **Question type**:
-   - Multiple-choice, Assertion–Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.

  These annotations support fine-grained analysis of model behavior across **knowledge vs. language ability** and **question format**.

  ---

  ## Considerations for Using the Data

  ### Social Impact

  IndicParam is designed to:

- - Enable rigorous evaluation of LLMs on **under-represented Indic languages** with substantial speaker populations but very limited web presence.
- - Encourage **culturally grounded** AI systems that perform robustly on Indic scripts and linguistic phenomena.
- - Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.

  Users should be aware that the content is drawn from **academic examinations**, and may over-represent formal, exam-style language relative to everyday usage.

@@ -162,14 +212,14 @@ Users should be aware that the content is drawn from **academic examinations**,

  To align with the paper and allow consistent comparison:

- 1. **Task**: Treat each instance as a multiple-choice QA item with four options.
- 2. **Input format**: Present `question_text` plus the four options (`A–D`) to the model.
- 3. **Required output**: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
- 4. **Decoding**: Use **greedy decoding / temperature = 0 / `do_sample = False`** to ensure deterministic outputs.
- 5. **Metric**: Compute **accuracy** based on exact match between predicted option and `correct_answer` (case-insensitive after mapping to A–D).
- 6. **Analysis**:
-    - Report **overall accuracy**.
-    - Break down results **per language**.

  ---

@@ -180,6 +230,14 @@ To align with the paper and allow consistent comparison:
  If you use IndicParam in your research, please cite:

  ```bibtex
  }
  ```

@@ -187,7 +245,7 @@ For related Hindi-only evaluation and question-type taxonomy, please also see an

  ### License

- IndicParam is released for **non-commercial research and evaluation**.

  ### Acknowledgments

 
README.md (updated)

  ---
  configs:
+ - config_name: IndicParam
+   data_files:
+   - path: data*
+     split: test
  tags:
  - benchmark
+ - low-resource
+ - indic-languages
+ task_categories:
+ - question-answering
+ - text-classification
+ license: cc-by-nc-4.0
+ language:
+ - npi
+ - guj
+ - mar
+ - ory
+ - doi
+ - mai
+ - raj
+ - san
+ - brx
+ - sat
+ - gom
+ - en
  ---

  ## Dataset Card for IndicParam

+ [Paper](https://huggingface.co/papers/2512.00333) | [Code](https://github.com/ayushbits/IndicParam)
+
  ### Dataset Summary

  IndicParam is a graduate-level benchmark designed to evaluate Large Language Models (LLMs) on their understanding of **low- and extremely low-resource Indic languages**.

  ### Source and Collection

+ - **Source**: Official UGC-NET language question papers and answer keys, downloaded from the UGC-NET/NTA website.
+ - **Scope**: Multiple exam sessions and years, covering language/literature and linguistics papers for each of the 11 languages plus the Sanskrit–English code-mixed set.
+ - **Extraction** (a rough sketch follows this list):
+   - Machine-readable PDFs are parsed directly.
+   - Non-selectable PDFs are processed using OCR.
+   - All text is normalized while preserving the original script and content.
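
For illustration only, a hypothetical version of this extraction step might look as follows, using `pypdf` for machine-readable files and `pytesseract` as an OCR fallback; the authors' actual tooling is not specified on this card.

```python
# Hypothetical extraction sketch; the paper's actual pipeline and tools may differ.
from pdf2image import convert_from_path
from pypdf import PdfReader
import pytesseract

def extract_text(pdf_path: str) -> str:
    # Try direct parsing first (machine-readable PDFs).
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    if text.strip():
        return text
    # Fall back to OCR for non-selectable PDFs (language codes are illustrative).
    images = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(img, lang="hin+eng") for img in images)
```
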

  ### Annotation

  In addition to the raw MCQs, each question is annotated by question type (described in detail in the paper):

+ - **Question type**:
+   - Multiple-choice, Assertion–Reason, List Matching, Fill in the blanks, Identify incorrect statement, Ordering.

  These annotations support fine-grained analysis of model behavior across **knowledge vs. language ability** and **question format**.

  ---

+ ## Sample Usage
+
+ The GitHub repository provides several Python scripts to evaluate models on the IndicParam dataset. You can adapt these scripts for your specific use case.
+
+ Typical usage pattern, as described in the GitHub README:
+
+ - **Prepare environment**: Install Python dependencies (see `requirements.txt` if present in the GitHub repository) and configure any required API keys or model caches.
+ - **Run evaluation**: Invoke one of the scripts with your chosen model configuration and an output directory; the scripts will:
+   - Load `data.csv`
+   - Construct language-aware MCQ prompts
+   - Record model predictions and compute accuracy (see the sketch below)
+
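
As a rough illustration of the prompt construction and greedy decoding these scripts perform, here is a minimal sketch with a generic Hugging Face causal LM; the model name, prompt template, and `predict` helper are illustrative assumptions, not the repository's exact implementation.

```python
# Minimal sketch only: prompt template, model choice, and field handling are
# assumptions, not the actual logic of the repository's evaluation scripts.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "google/gemma-2b"  # any open-weight causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def predict(question_text: str, options: list[str]) -> str:
    """Build an MCQ prompt and decode greedily to a single option label."""
    prompt = (
        question_text
        + "\n"
        + "\n".join(f"{label}. {text}" for label, text in zip("ABCD", options))
        + "\nAnswer with a single letter (A, B, C, or D): "
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    # Greedy decoding (do_sample=False) keeps outputs deterministic.
    output = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    completion = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return completion.strip()[:1].upper()
```
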
+ Example scripts available in the [GitHub repository](https://github.com/ayushbits/IndicParam):
+ - `evaluate_open_models.py`: evaluates open-weight Hugging Face models on IndicParam.
+ - `evaluate_gpt_oss.py`: runs the GPT-OSS-120B model on the same data.
+ - `evaluate_openrouter.py`: benchmarks closed models via the OpenRouter API.
+
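
For the closed-model path, a single OpenRouter request looks roughly like the sketch below. OpenRouter exposes an OpenAI-compatible API, so the standard `openai` client works with a custom `base_url`; the model ID and prompt are placeholders, and `evaluate_openrouter.py` itself may be structured differently.

```python
# Hypothetical sketch of one OpenRouter call; not the actual script's code.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

prompt = "<question_text>\nA. ...\nB. ...\nC. ...\nD. ...\nAnswer with a single letter (A, B, C, or D):"
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4",  # placeholder model ID
    temperature=0,  # deterministic decoding, matching the evaluation protocol below
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```
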
+ Script-level arguments and options are documented via each script's `-h`/`--help` flag.
+
+ ```bash
+ # Example of running evaluation with an open-weight model:
+ python evaluate_open_models.py --model_name_or_path google/gemma-2b --output_dir results/gemma-2b
+
+ # Example of running evaluation with GPT-OSS:
+ python evaluate_gpt_oss.py --model_name_or_path openai/gpt-oss-120b --output_dir results/gpt-oss-120b
+ ```
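
The benchmark can also be loaded directly from the Hub with the `datasets` library; a minimal sketch, where the repo ID is a placeholder for this dataset's actual Hub path:

```python
# Config name "IndicParam" and the "test" split follow the YAML metadata above;
# the repo ID is a placeholder.
from datasets import load_dataset

ds = load_dataset("<hub-org>/IndicParam", name="IndicParam", split="test")
print(len(ds), ds.column_names)
```
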
+
+ ---
+
  ## Considerations for Using the Data

  ### Social Impact

  IndicParam is designed to:

+ - Enable rigorous evaluation of LLMs on **under-represented Indic languages** with substantial speaker populations but very limited web presence.
+ - Encourage **culturally grounded** AI systems that perform robustly on Indic scripts and linguistic phenomena.
+ - Highlight the performance gaps between high-resource and low-/extremely low-resource Indic languages, informing future pretraining and data collection efforts.

  Users should be aware that the content is drawn from **academic examinations**, and may over-represent formal, exam-style language relative to everyday usage.


  To align with the paper and allow consistent comparison:

+ 1. **Task**: Treat each instance as a multiple-choice QA item with four options.
+ 2. **Input format**: Present `question_text` plus the four options (`A–D`) to the model.
+ 3. **Required output**: A single option label (`A`, `B`, `C`, or `D`), with no explanation.
+ 4. **Decoding**: Use **greedy decoding / temperature = 0 / `do_sample = False`** to ensure deterministic outputs.
+ 5. **Metric**: Compute **accuracy** based on exact match between predicted option and `correct_answer` (case-insensitive after mapping to A–D).
+ 6. **Analysis** (see the scoring sketch after this list):
+    - Report **overall accuracy**.
+    - Break down results **per language**.
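
A minimal scoring sketch for steps 5 and 6, assuming a `predict` callable like the one sketched under Sample Usage; the option and language field names (`option_a`–`option_d`, `language`) are assumptions that may differ from the dataset's actual schema:

```python
# Scoring sketch: exact-match accuracy overall and per language (steps 5-6).
# Field names other than `question_text` and `correct_answer` are assumptions.
from collections import defaultdict

def score(rows, predict):
    correct, total = defaultdict(int), defaultdict(int)
    for row in rows:
        options = [row[k] for k in ("option_a", "option_b", "option_c", "option_d")]
        pred = predict(row["question_text"], options)
        lang = row["language"]
        total[lang] += 1
        # Case-insensitive exact match on the option label.
        if pred.strip().upper() == row["correct_answer"].strip().upper():
            correct[lang] += 1
    per_language = {lang: correct[lang] / total[lang] for lang in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_language
```
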

  ---

  If you use IndicParam in your research, please cite:

  ```bibtex
+ @misc{maheshwari2025indicparambenchmarkevaluatellms,
+   title={IndicParam: Benchmark to evaluate LLMs on low-resource Indic Languages},
+   author={Ayush Maheshwari and Kaushal Sharma and Vivek Patel and Aditya Maheshwari},
+   year={2025},
+   eprint={2512.00333},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2512.00333},
  }
  ```
 
 

  ### License

+ IndicParam is released for **non-commercial research and evaluation** under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).

  ### Acknowledgments