---
tags:
- tibetan
- classical-tibetan
- buddhist-texts
- corpus
- openpecha
license: mit
language:
- bo
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: collection
    dtype: string
  - name: filename
    dtype: string
  - name: text
    dtype: string
  - name: char_count
    dtype: int64
---
# BoCorpus
A comprehensive Tibetan corpus dataset for language model training and NLP research.
## Dataset Description
BoCorpus is a curated collection of classical Tibetan texts compiled from multiple digital collections. The dataset is designed for training language models and conducting research in Tibetan natural language processing.
### Collections Included
The corpus contains texts from the following collections:
- **Bon Kangyur**: 151 texts
- **Derge Kangyur**: 103 texts
- **Derge Tengyur**: 213 texts
- **DharmaEbook**: 98 texts
- **Pagen Project**: 1 text
- **Tsadra Collection**: 266 texts
- **འབྲི་ལུགས་བང་མཛོད་སྐོར་ལྔ།**: 136 texts
- **རིན་ཆེན་གཏེར་མཛོད་ཆེན་མོ།**: 71 texts
### Data Statistics
- **Total records**: 1,039
- **Total characters**: 603,325,999
- **Average characters per text**: 580,679
## Dataset Schema
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique UUID4 identifier for each record |
| `collection` | string | Name of the source collection |
| `filename` | string | Original filename (without extension) |
| `text` | string | Full text content with all line breaks removed |
| `char_count` | int64 | Total number of characters in the text |
## Usage
### Loading with HuggingFace Datasets
```python
from datasets import load_dataset
dataset = load_dataset("openpecha/BoCorpus", split="train")
# Access a single example
example = dataset[0]
print(f"Collection: {example['collection']}")
print(f"Characters: {example['char_count']}")
print(f"Text preview: {example['text'][:100]}...")
```
### Loading with Pandas
```python
import pandas as pd
# Assumes a local copy of the dataset's Parquet file
df = pd.read_parquet("bo_corpus.parquet")
print(df.head())
```
### Loading with PyArrow
```python
import pyarrow.parquet as pq
# Assumes a local copy of the dataset's Parquet file
table = pq.read_table("bo_corpus.parquet")
df = table.to_pandas()
```
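Once the data is in a DataFrame, the per-collection counts and character totals reported above can be reproduced with a single `groupby`. A sketch on toy rows with the same columns:

```python
import pandas as pd

# Toy DataFrame mirroring the BoCorpus schema (illustrative values only)
df = pd.DataFrame({
    "collection": ["Derge Kangyur", "Derge Kangyur", "Tsadra Collection"],
    "char_count": [100, 200, 50],
})

# Number of texts and total characters per collection
stats = df.groupby("collection")["char_count"].agg(["count", "sum"])
print(stats)
```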
## Data Preparation
The texts in this dataset have undergone the following preprocessing:
1. **Newline removal**: All newline characters (`\n`) are removed to create continuous text strings
2. **UUID assignment**: Each text receives a unique UUID4 identifier
3. **Character counting**: Total character count is computed for each text
4. **Collection tagging**: Each record is tagged with its source collection name
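The four steps above can be sketched as a small function. The actual pipeline code is not part of this card, so the function below is an illustrative reconstruction, not the original implementation:

```python
import uuid

def prepare_record(raw_text: str, collection: str, filename: str) -> dict:
    """Apply the card's four preprocessing steps to one raw text."""
    text = raw_text.replace("\n", "")  # 1. newline removal
    return {
        "id": str(uuid.uuid4()),       # 2. UUID4 assignment
        "collection": collection,      # 4. collection tagging
        "filename": filename,
        "text": text,
        "char_count": len(text),       # 3. character counting
    }

record = prepare_record("ཚིག \nགཉིས།", "Tsadra Collection", "example")
print(record["char_count"])
```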
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{bocorpus,
  title  = {BoCorpus: A Tibetan Text Corpus},
  author = {OpenPecha},
  year   = {2024},
  url    = {https://huggingface.co/openpecha/BoCorpus}
}
```
## License
This dataset is released under the MIT License.
## Acknowledgments
This corpus was prepared by [OpenPecha](https://openpecha.org) as part of their mission to make Tibetan Buddhist texts accessible for digital research and AI applications.