---
license: cc-by-nc-4.0
task_categories:
  - text-classification
  - token-classification
language:
  - sa
  - bo
pretty_name: DharmaBench
---

Dataset Card for DharmaBench

Dataset Details

Dataset Description

DharmaBench is a multi-task benchmark suite for evaluating large language models (LLMs) on classification and detection tasks in historical Buddhist texts written in Sanskrit and Classical Tibetan.
It contains 13 tasks (6 Sanskrit, 7 Tibetan), with 4 tasks shared across both languages, designed to measure linguistic, cultural, and structural understanding in low-resource, ancient-language contexts.

The benchmark includes tasks such as metaphor and simile detection, quotation detection, verse/prose classification, metre classification, and root-text/commentary alignment. These reflect key challenges faced by philologists, historians of philosophy and religion, and digital humanities researchers studying Buddhist textual traditions. For the exact definition and description of the tasks, please see the repository or the paper.

  • Curated by: Intellexus Project (Kai Golan Hashiloni et al.)
  • Funded by: Supported in part by the European Research Council (Intellexus, Project No. 101118558).
  • Shared by: Intellexus Project
  • Language(s): Sanskrit (sa), Classical Tibetan (bo)
  • License: CC BY-NC 4.0

Dataset Sources

[More Information Needed]

Uses

Direct Use

DharmaBench can be used to:

  • Evaluate multilingual or low-resource LLMs on culturally and linguistically rich ancient-language data.
  • Benchmark Sanskrit and Classical Tibetan performance across a variety of classification and detection tasks.
  • Support philologists and digital humanists in semi-automating annotation, quotation tracing, or commentary alignment.
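For comparative evaluation, the uses above typically reduce to scoring model predictions against gold labels per task. Below is a minimal sketch for a binary detection task such as quotation detection; the label names and toy examples are hypothetical, not drawn from DharmaBench itself.

```python
# Minimal evaluation sketch for a binary detection task
# (e.g., quotation detection). Labels here are illustrative only.

def accuracy(gold, pred):
    """Fraction of examples where the predicted label matches gold."""
    assert len(gold) == len(pred), "gold and pred must align one-to-one"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Toy gold labels and model predictions, for illustration.
gold = ["quote", "no_quote", "quote", "no_quote"]
pred = ["quote", "no_quote", "no_quote", "no_quote"]

print(accuracy(gold, pred))  # → 0.75
```

In practice one would substitute real task labels from the benchmark files and, for token-level tasks, a span- or token-based metric such as F1 rather than plain accuracy.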

Out-of-Scope Use

  • None

Dataset Structure

  • Each task is located under either Sanskrit/ or Tibetan/, with files such as train.json and test.json, depending on availability.
  • Each task has a slightly different schema and set of columns; consult the individual task files for details.
  • All data are standardized and formatted for text- and token-level tasks.
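Given that layout, a task split can be read directly from its JSON file. The sketch below assumes the per-task JSON files contain lists of records; the task directory name is hypothetical, and the actual columns vary by task as noted above.

```python
import json
from pathlib import Path

# Hypothetical task directory; real task names differ.
task_dir = Path("Sanskrit/metre_classification")

def load_split(split):
    """Load one JSON split (train.json / test.json) if present."""
    path = task_dir / f"{split}.json"
    if not path.exists():
        return []  # some tasks ship only a test split
    with path.open(encoding="utf-8") as f:
        return json.load(f)

train = load_split("train")
test = load_split("test")
print(len(train), len(test))
```

Handling a missing train.json explicitly matters here because, per the layout above, not every task provides both splits.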

Dataset Creation

Curation Rationale

The dataset was created to enable systematic benchmarking of LLMs on Sanskrit and Classical Tibetan, languages central to Buddhist textual transmission yet underrepresented in NLP. It supports evaluation of linguistic understanding, structural analysis, and cultural reasoning.

Source Data

Data Collection and Processing

Texts were sourced from public-domain Buddhist corpora, including digitized canonical and commentarial materials. Data were cleaned, normalized, and manually aligned where necessary. Problematic or ambiguous samples were discussed collaboratively and excluded when consensus could not be reached.

Who are the source data producers?

Original texts were produced by Buddhist scholars between the 1st millennium BCE and the 19th century CE. Digital transcriptions were prepared by open-source initiatives and Buddhist textual archives.

Annotations

Annotation process

Domain experts in Sanskrit and Classical Tibetan studies carried out annotations. Ambiguities and inconsistencies were discussed collaboratively, and annotation guidelines were iteratively refined. Disagreements were resolved through group discussion or by excluding samples when consensus was not possible.

Who are the annotators?

Annotators were scholars and research assistants from the Intellexus Project, with backgrounds in Buddhist studies, linguistics, and computational linguistics.

Personal and Sensitive Information

No personal or sensitive information is contained in the dataset. All texts are historical and in the public domain.

Bias, Risks, and Limitations

The dataset represents canonical and scholastic Buddhist materials and may not generalize to colloquial or modern-language use. Biases inherent in the source texts (e.g., religious, philosophical, or gender-related perspectives) are preserved to maintain their historical authenticity.

Tasks with very short textual inputs can sometimes be resolved through formal cues (e.g., punctuation, structure) rather than deep understanding.

Recommendations

Users should be aware of the dataset's risks, biases, and limitations, interpret model performance cautiously, and avoid overgeneralizing results. DharmaBench is best used for comparative evaluation and fine-tuning in controlled research settings.

Citation

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Dataset Card Authors

Kai Golan Hashiloni (Intellexus Project), with contributions from the Intellexus Sanskrit and Tibetan research teams.

Dataset Card Contact

For questions or contributions: golankai@gmail.com