---
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: unsandhied
    dtype: string
  splits:
  - name: train
    num_bytes: 9350944
    num_examples: 89323
  - name: validation
    num_bytes: 1164083
    num_examples: 10235
  - name: test
    num_bytes: 1169683
    num_examples: 9965
  - name: test_500
    num_bytes: 62539
    num_examples: 500
  - name: validation_500
    num_bytes: 53738
    num_examples: 500
  download_size: 7114072
  dataset_size: 11800987
---
# Dataset Card for "sanskrit-sandhi-split-hackathon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
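As a quick sanity check on the metadata above, the per-split `num_bytes` and `num_examples` values can be summed and compared against the reported `dataset_size` (a minimal sketch; the split figures are copied directly from the `dataset_info` block):

```python
# Per-split sizes taken from the dataset_info metadata above.
splits = {
    "train": {"num_bytes": 9350944, "num_examples": 89323},
    "validation": {"num_bytes": 1164083, "num_examples": 10235},
    "test": {"num_bytes": 1169683, "num_examples": 9965},
    "test_500": {"num_bytes": 62539, "num_examples": 500},
    "validation_500": {"num_bytes": 53738, "num_examples": 500},
}

total_bytes = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())

print(total_bytes)     # 11800987 — matches dataset_size
print(total_examples)  # 110523 examples across all five splits
```

Note that `download_size` (7114072) is smaller than `dataset_size` because the hosted files are compressed.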