snjev310 committed · Commit 9677f7f · verified · Parent(s): 496eb3f

Update README.md

Files changed (1): README.md (+75, -0)
@@ -104,4 +104,79 @@ configs:
  data_files:
  - split: test
    path: data/test-*
task_categories:
- token-classification
language:
- hi
- anp
- bho
- mag
pretty_name: Bihari Languages UPOS Dataset (Angika, Magahi, Bhojpuri)
size_categories:
- n<1K
---
# Bihari Languages UPOS Dataset

This dataset provides Part-of-Speech (POS) tags for **Angika (anp)**, **Magahi (mag)**, and **Bhojpuri (bho)**, aligned in parallel with **Hindi (hi)**. The annotations follow the **Universal Dependencies (UD)** Universal Part-of-Speech (UPOS) standard.

This work is part of research conducted at the **Department of Computer Science and Engineering, IIT Bombay**.

## Dataset Details

- **Languages:** Angika, Magahi, Bhojpuri, Hindi
- **Task:** Token Classification (Part-of-Speech Tagging)
- **Schema:** Universal Dependencies (UPOS)
- **Total Tags:** 18 (the 17 standard UPOS tags plus an additional `UNK` tag)

### Supported Tags

The dataset uses the following integer mapping for the `test` split:

`0: NOUN, 1: PUNCT, 2: ADP, 3: NUM, 4: SYM, 5: SCONJ, 6: ADJ, 7: PART, 8: DET, 9: CCONJ, 10: PROPN, 11: PRON, 12: UNK, 13: X, 14: ADV, 15: INTJ, 16: VERB, 17: AUX`
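If you prefer not to rely on the dataset's feature metadata, the same mapping can be written out as a plain dictionary. This is a convenience sketch: `ID2UPOS` and `ids_to_tags` are names introduced here for illustration, not part of the dataset itself.

```python
# Integer-to-UPOS mapping, transcribed from the list above
# (17 standard UPOS tags plus an additional UNK tag).
ID2UPOS = {
    0: "NOUN", 1: "PUNCT", 2: "ADP", 3: "NUM", 4: "SYM", 5: "SCONJ",
    6: "ADJ", 7: "PART", 8: "DET", 9: "CCONJ", 10: "PROPN", 11: "PRON",
    12: "UNK", 13: "X", 14: "ADV", 15: "INTJ", 16: "VERB", 17: "AUX",
}

def ids_to_tags(ids):
    """Decode a sequence of integer label IDs into UPOS tag strings."""
    return [ID2UPOS[i] for i in ids]

print(ids_to_tags([10, 16, 1]))  # → ['PROPN', 'VERB', 'PUNCT']
```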
## Institutional Credit & Support

* This research was conducted at the **Department of Computer Science and Engineering, IIT Bombay**.
* The work is supported by a Ph.D. grant from the **TCS Research Foundation** for research on extremely low-resource Indian languages.

## 🚀 Getting Started

You can load the dataset directly using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the test split
dataset = load_dataset("snjev310/bihari-languages-upos", split="test")

# Access the first sentence in Angika
print(f"Tokens: {dataset[0]['angika_token']}")
print(f"UPOS IDs: {dataset[0]['angika_upos']}")

# Map integer IDs back to tag names
labels = dataset.features["angika_upos"].feature.names
readable_tags = [labels[i] for i in dataset[0]['angika_upos']]
print(f"UPOS Tags: {readable_tags}")
```
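If you fine-tune a subword-based tagger on these labels, the word-level tags must be re-aligned to subword tokens. Below is a minimal, self-contained sketch of the standard alignment pattern; the `word_ids` list here is hand-written for illustration (in practice it would come from a Hugging Face fast tokenizer's `word_ids()` method), and this helper is not part of the dataset.

```python
IGNORE_INDEX = -100  # index ignored by PyTorch's cross-entropy loss

def align_labels(word_ids, labels):
    """Spread word-level labels over subword tokens: label only the first
    subword of each word, and mask continuation subwords and special
    tokens (word id None) with IGNORE_INDEX."""
    aligned, previous = [], None
    for wid in word_ids:
        if wid is None:            # special token, e.g. [CLS] / [SEP]
            aligned.append(IGNORE_INDEX)
        elif wid != previous:      # first subword of a new word
            aligned.append(labels[wid])
        else:                      # continuation subword of the same word
            aligned.append(IGNORE_INDEX)
        previous = wid
    return aligned

# Toy example: 3 words tagged [PROPN, VERB, PUNCT] = [10, 16, 1],
# where the second word is split into two subwords.
print(align_labels([None, 0, 1, 1, 2, None], [10, 16, 1]))
# → [-100, 10, 16, -100, 1, -100]
```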
## Research & Citation

If you use this dataset in your research, please cite the following paper, published in the **Findings of ACL 2024**:

```bibtex
@inproceedings{kumar-etal-2024-part,
    title = "Part-of-speech Tagging for Extremely Low-resource {I}ndian Languages",
    author = "Kumar, Sanjeev and
      Jyothi, Preethi and
      Bhattacharyya, Pushpak",
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-acl.857/",
    doi = "10.18653/v1/2024.findings-acl.857",
    pages = "14422--14431"
}
```