---
license: cc-by-4.0
language: [or]
size_categories: [1M<n<10M]
pretty_name: ODEN-Indcorpus
pipeline_tag: text-generation
---

# ODEN‑Indcorpus 📚

**ODEN‑Indcorpus** is a **3.7‑million‑line** mixed-domain Odia text collection curated from
fiction, dialogue, encyclopaedia articles, Q&A and community writing gathered under the ODEN initiative.  
After normalisation and de‑duplication, it provides a solid basis for training
Odia‑centric **tokenizers, language models and embedding models**.

| Split | Lines |
|-------|-------|
| Train | 3,373,817 |
| Validation | 187,434 |
| Test | 187,435 |
| **Total** | **3,748,686** |
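
The counts above correspond to a roughly 90/5/5 partition; a quick sanity check on the figures from the table (not recomputed from the data):

```python
# Split sizes as reported in the table above
train, validation, test = 3_373_817, 187_434, 187_435
total = train + validation + test

print(f"{total:,}")             # total lines: 3,748,686
print(round(train / total, 2))  # train fraction: 0.9
```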

The material ranges from conversational snippets to encyclopaedic passages and
reflects both classical and contemporary Odia usage, ensuring vocabulary
coverage across formal and colloquial registers.

## Quick‑start
```python
from datasets import load_dataset

# Stream the train split instead of downloading the full corpus up front
ds = load_dataset("BBSRguy/ODEN-Indcorpus", split="train", streaming=True)
for example in ds.take(3):
    print(example["text"])
```

## Intended uses
* Training **Byte‑/SentencePiece tokenizers** optimised for Odia
* Pre‑training or continued training of Odia‑focused **LLMs / ALMs**
* Embedding evaluation, topic modelling, text classification baselines
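
For the first use case, a minimal sketch of training a BPE tokenizer with the Hugging Face `tokenizers` library; the two-line corpus below is an illustrative stand-in for iterating over the dataset's `text` field:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Stand-in corpus; in practice, stream the dataset's "text" field instead
corpus = ["ଓଡ଼ିଆ ଏକ ଶାସ୍ତ୍ରୀୟ ଭାଷା ।", "ଓଡ଼ିଶା ଭାରତର ଏକ ରାଜ୍ୟ ।"]

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=500, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

print(tokenizer.encode("ଓଡ଼ିଆ ଭାଷା").tokens)
```

With the full 3.7M-line corpus, a realistic `vocab_size` (e.g. 32k–64k) would replace the toy value used here.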

## Citation
```bibtex
@misc{oden-indcorpus-2025,
  title  = {ODEN‑Indcorpus: A 3.7‑M line Odia Text Dataset},
  author = {@BBSRguy},
  year   = 2025,
  howpublished = {\url{https://huggingface.co/datasets/BBSRguy/ODEN-Indcorpus}}
}
```