|
|
--- |
|
|
license: cc-by-4.0 |
|
|
language: [or] |
|
|
size_categories: [1M<n<10M] |
|
|
pretty_name: ODEN-Indcorpus |
|
|
pipeline_tag: text-generation |
|
|
--- |
|
|
|
|
|
# ODEN‑Indcorpus 📚 |
|
|
|
|
|
**ODEN‑Indcorpus** is a **3.7‑million‑line** mixed‑domain Odia text collection curated from
fiction, dialogue, encyclopaedia entries, Q&A and community writing gathered under the ODEN
initiative. After normalisation and de‑duplication it serves as a robust substrate for training
Odia‑centric **tokenizers, language models and embedding spaces**.
|
|
|
|
|
| Split | Lines | |
|
|
|-------|-------| |
|
|
| Train | 3,373,817 | |
|
|
| Validation | 187,434 | |
|
|
| Test | 187,435 | |
|
|
| **Total** | **3,748,686** | |
|
|
|
|
|
The material ranges from conversational snippets to encyclopaedic passages and |
|
|
reflects both classical and contemporary Odia usage, ensuring vocabulary |
|
|
coverage across formal and colloquial registers. |
|
|
|
|
|
## Quick‑start |
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Stream the corpus instead of downloading it in full
ds = load_dataset("BBSRguy/ODEN-Indcorpus", split="train", streaming=True)

# Each example is a dict with a single "text" field
for example in ds.take(3):
    print(example["text"])
|
|
``` |
|
|
|
|
|
## Intended uses |
|
|
* Training **Byte‑/SentencePiece tokenizers** optimised for Odia |
|
|
* Pre‑training or continued training of Odia‑focused **LLMs / ALMs** |
|
|
* Embedding evaluation, topic modelling, text classification baselines |
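In practice a tokenizer for this corpus would be trained with a library such as SentencePiece or Hugging Face `tokenizers`. As a self‑contained illustration of the pair‑merging step behind BPE‑style tokenizer training, here is a minimal pure‑Python sketch (the `learn_bpe` helper and the toy corpus are hypothetical, not part of this dataset's tooling):

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    # Represent each word as a tuple of characters; a byte-level variant
    # would operate on UTF-8 bytes, but characters keep Odia text readable.
    vocab = Counter()
    for line in corpus:
        for word in line.split():
            vocab[tuple(word)] += 1

    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)

        # Rewrite the vocabulary with the winning pair fused into one symbol
        merged = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] += freq
        vocab = merged
    return merges
```

The learned merge list, applied in order at encoding time, is what a production BPE tokenizer stores; streaming the train split through such a learner is how the corpus would feed tokenizer training.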
|
|
|
|
|
## Citation |
|
|
```bibtex |
|
|
@misc{oden-indcorpus-2025, |
|
|
title = {ODEN‑Indcorpus: A 3.7‑M line Odia Text Dataset}, |
|
|
author = {@BBSRguy}, |
|
|
year = 2025, |
|
|
howpublished = {\url{https://huggingface.co/datasets/BBSRguy/ODEN-Indcorpus}} |
|
|
} |
|
|
``` |
|
|
|