---
dataset_info:
  features:
  - name: doc_id
    dtype: string
  - name: type
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 25324509618
    num_examples: 806930
  download_size: 9419131940
  dataset_size: 25324509618
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
- en
pretty_name: long_context
size_categories:
- 100K<n<1M
---

# Dataset

This dataset was filtered from the AI4Bharat [sangraha](https://huggingface.co/datasets/ai4bharat/sangraha) dataset, the largest high-quality, cleaned Indic-language pretraining corpus, containing 251B tokens summed over 22 languages and extracted from curated sources, existing multilingual corpora, and large-scale translations.

As of now, this dataset contains only Hindi.

# Information

* This dataset is intended mainly for long-context training.
* The minimum text length is `6000` and the maximum is `3754718`.
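The length bounds above come from a filter over document lengths. A minimal sketch of that kind of filter in plain Python (the toy records and field lengths here are hypothetical, standing in for the real corpus):

```python
# Hypothetical toy records mirroring the dataset's features (doc_id, type, text).
records = [
    {"doc_id": "a", "type": "web", "text": "x" * 7000},   # long enough to keep
    {"doc_id": "b", "type": "web", "text": "y" * 100},    # too short, dropped
]

MIN_LEN = 6000  # minimum length stated on this card

# Keep only documents whose text meets the minimum length.
long_docs = [r for r in records if len(r["text"]) >= MIN_LEN]
print(len(long_docs))  # 1
```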

# Getting started 

To download the entire dataset:
```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi")
```

If the dataset is too large to download, you can stream it instead:
```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)
```
```python
# take(2) returns a new IterableDataset; wrap it in list() to materialize the examples
list(dataset.take(2))
```
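When streaming, examples are pulled lazily from the remote files rather than loaded at once. A minimal sketch of that consumption pattern in plain Python, with a hypothetical generator standing in for the remote stream:

```python
import itertools

# Hypothetical generator standing in for the remote streaming dataset.
def stream():
    for i in itertools.count():
        yield {"doc_id": str(i), "type": "web", "text": "..."}

# Equivalent in spirit to dataset.take(2): pull only the first two examples,
# never touching the rest of the (potentially huge) stream.
first_two = list(itertools.islice(stream(), 2))
print([ex["doc_id"] for ex in first_two])  # ['0', '1']
```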