# Dataset Card for Sanskrit Classic Corpus

## Dataset Description

### Dataset Summary
The Sanskrit Classic Corpus is a curated collection of classical Sanskrit texts sourced from ancient Indian literature, including epics, philosophical treatises, and poetic works. This dataset is designed to support natural language processing (NLP) tasks such as language modeling, translation, text generation, and linguistic analysis for Sanskrit, one of the world's oldest Indo-European languages. It contains approximately 150,000 lines of tokenized Sanskrit text, emphasizing Devanagari script and traditional orthography.
This dataset aims to preserve and democratize access to Sanskrit heritage while enabling AI research in low-resource languages.
### Languages

- Primary: Sanskrit (Devanagari script, ISO 639-3: `san`)
### License

- CC-BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International). Users must attribute the source and share derivative works under the same license.
### Citation
If you use this dataset, please cite it as follows:
```bibtex
@misc{sanskrit_classic_2023,
  author    = {Suraj Parmar},
  title     = {Sanskrit Classic Corpus: A Dataset for Classical Sanskrit NLP},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/surajp/sanskrit_classic}
}
```
## Dataset Structure

### Data Instances
Each instance in the dataset is a single entry representing a verse, stanza, or short passage from classical Sanskrit texts. The structure is simple and tabular for easy loading.
#### Example
| text (str) | source (str) | chapter (int) |
|---|---|---|
| "ॐ नमो भगवते वासुदेवाय।" | Bhagavad Gita | 1 |
| "अथ त्वमसि।" | Chandogya Upanishad | 6 |
- text: The Sanskrit passage in Devanagari script (UTF-8 encoded).
- source: The originating text (e.g., "Ramayana", "Upanishads").
- chapter: The chapter or section number (integer, starting from 1).
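The three-field schema above can be sanity-checked with plain standard-library Python. This is an illustrative sketch (the `is_valid_record` helper and the `example` record are hypothetical, not part of the dataset's tooling):

```python
import unicodedata

# Hypothetical record following the dataset's three-field schema.
example = {
    "text": "ॐ नमो भगवते वासुदेवाय।",
    "source": "Bhagavad Gita",
    "chapter": 1,
}

def is_valid_record(record):
    """Check field types and that the text is non-empty and NFC-normalized."""
    if not isinstance(record.get("text"), str) or not record["text"]:
        return False
    if not isinstance(record.get("source"), str):
        return False
    if not isinstance(record.get("chapter"), int) or record["chapter"] < 1:
        return False
    # Entries are stated to be NFC-normalized, so normalization is a no-op.
    return unicodedata.normalize("NFC", record["text"]) == record["text"]
```

A check like this is useful before training, since malformed rows (empty text, zero-based chapters) otherwise surface only as downstream model errors.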
### Data Splits
The dataset is split into training, validation, and test sets to facilitate model development:
| Split | Examples | Percentage |
|---|---|---|
| train | 120,000 | 80% |
| validation | 15,000 | 10% |
| test | 15,000 | 10% |
Total size: ~150,000 examples.
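The 80/10/10 proportions can be reproduced with a seeded shuffle-and-slice, sketched below with the standard library only. Note that the actual split is stratified by source (see Creation Process); this simplified random version, with the hypothetical helper `split_indices`, only illustrates the arithmetic:

```python
import random

def split_indices(n, train=0.8, val=0.1, seed=42):
    """Shuffle indices deterministically, then cut train/validation/test slices."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(150_000)  # 120,000 / 15,000 / 15,000
```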
### Data Fields

- `text` (string): The core Sanskrit text.
- `source` (string): Metadata for the literary source.
- `chapter` (int): Numerical identifier for the section.
## Dataset Creation

### Creation Process
- Sourcing: Texts were digitized from public domain sources, including the Digital Corpus of Sanskrit (DCS) and GRETIL (Göttingen Register of Electronic Texts in Indian Languages).
- Preprocessing:
- Tokenization using Sanskrit-specific sandhi splitter tools (e.g., based on Sanskrit Morphology Analyzer).
- Normalization: Unicode normalization to NFC, removal of diacritics where inconsistent, and filtering for verses longer than 5 words.
- Deduplication: Near-duplicate removal using fuzzy hashing to eliminate repeats across sources.
- Splitting: Stratified split based on source distribution to ensure balanced representation of texts (e.g., 30% epics, 40% philosophy, 30% poetry).
- Validation: Manual review of 5% random samples by Sanskrit scholars for accuracy.
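The normalization, length-filtering, and deduplication steps above can be sketched with the standard library. This is a simplified illustration: it uses exact SHA-256 hashing in place of the fuzzy hashing mentioned above, and whitespace splitting in place of sandhi-aware tokenization; the `preprocess` function is hypothetical:

```python
import hashlib
import unicodedata

def preprocess(verses):
    """NFC-normalize, keep verses longer than 5 words, drop duplicates by hash."""
    seen = set()
    out = []
    for verse in verses:
        verse = unicodedata.normalize("NFC", verse.strip())
        # Filter: keep only verses longer than 5 words (whitespace-tokenized).
        if len(verse.split()) <= 5:
            continue
        # Deduplicate by content hash (exact-match stand-in for fuzzy hashing).
        digest = hashlib.sha256(verse.encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        out.append(verse)
    return out
```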
The dataset was compiled in 2023 by Suraj Parmar as part of efforts to build resources for Indic NLP.
### Source Data
- Primary Sources:
- Mahabharata (K. M. Ganguli translation base).
- Ramayana (Valmiki).
- Upanishads (various, e.g., Brihadaranyaka).
- Total raw size: ~5 MB of plain text.
- Curation Tools: Python scripts using libraries such as `indic-nlp-library` for transliteration checks and `pandas` for structuring.
### Known Issues
- Script Consistency: Some older digitizations may have minor Devanagari font variations; users should apply font normalization if needed.
- Low-Resource Bias: Over-representation of epic literature; philosophical texts like Vedanta are underrepresented.
- No Audio/OCR: Pure text only; for multimodal tasks, consider pairing with OCR datasets like Process-Venue/Sanskrit-OCR-Typed-Dataset.
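A quick way to flag entries affected by the script-consistency issue above is to measure how much of each entry falls in the Devanagari Unicode block (U+0900–U+097F). The `devanagari_ratio` helper below is a hypothetical sketch, not part of the dataset's tooling:

```python
def devanagari_ratio(text):
    """Fraction of non-space characters in the Devanagari block (U+0900-U+097F)."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    in_block = sum(1 for c in chars if "\u0900" <= c <= "\u097f")
    return in_block / len(chars)
```

Entries scoring well below 1.0 likely contain Latin transliteration, digits, or mis-encoded characters worth inspecting before training.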
## Considerations for Use

### Intended Use
- Primary Tasks: Pre-training language models (e.g., BERT for Sanskrit), machine translation (Sanskrit-to-English), and poetry generation.
- Secondary Tasks: Linguistic studies, such as sandhi resolution or morphological parsing.
### Out-of-Scope Uses
- Commercial exploitation without attribution.
- Any application promoting misinformation about historical texts.
### Ethical Considerations
- Bias: The dataset reflects traditional Sanskrit canon, which is predominantly Brahmanical; it may underrepresent folk or regional variants.
- Privacy: No personal data is included; all texts are ancient works in the public domain.
- Sustainability: Encourage contributions back to open-source Sanskrit NLP communities (e.g., via GitHub).
### Reproducibility

To load the dataset in Python:

```python
from datasets import load_dataset

dataset = load_dataset("surajp/sanskrit_classic")
print(dataset["train"][0])  # first training example: {"text": ..., "source": ..., "chapter": ...}
```
For contributions or issues, open a discussion on the Hugging Face repository.