---
configs:
- config_name: all
  data_files:
  - path:
    - all.jsonl.zst
    split: train
- config_name: sample_k100
  data_files:
  - path:
    - sample_k100.jsonl.zst
    split: train
- config_name: sample_k1000
  data_files:
  - path:
    - sample_k1000.jsonl.zst
    split: train
- config_name: sample_k10000
  data_files:
  - path:
    - sample_k10000.jsonl.zst
    split: train
- config_name: sample_k200
  data_files:
  - path:
    - sample_k200.jsonl.zst
    split: train
- config_name: sample_k2000
  data_files:
  - path:
    - sample_k2000.jsonl.zst
    split: train
- config_name: sample_k20000
  data_files:
  - path:
    - sample_k20000.jsonl.zst
    split: train
- config_name: sample_k500
  data_files:
  - path:
    - sample_k500.jsonl.zst
    split: train
- config_name: sample_k5000
  data_files:
  - path:
    - sample_k5000.jsonl.zst
    split: train
- config_name: sample_k50000
  data_files:
  - path:
    - sample_k50000.jsonl.zst
    split: train
license: odc-by
task_categories:
- text-generation
- feature-extraction
language:
- en
---
# High Quality Text (Longer) Dataset
This dataset is derived from [agentlans/high-quality-text](https://huggingface.co/datasets/agentlans/high-quality-text):
only chunks between 1750 and 2250 Meta Llama 3.1 tokens long were kept.
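The length filter can be sketched as follows. Here `count_tokens` is a hypothetical stand-in for the Meta Llama 3.1 tokenizer (the card does not include the actual filtering script); with `transformers` it could be something like `lambda t: len(tokenizer(t)["input_ids"])`.

```python
def filter_by_token_length(chunks, count_tokens, lo=1750, hi=2250):
    """Keep only chunks whose token count falls within [lo, hi] inclusive.

    `count_tokens` maps a text chunk to its token count; the bounds
    default to the 1750-2250 window described above.
    """
    return [c for c in chunks if lo <= count_tokens(c) <= hi]
```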
The chunks were embedded using [MongoDB/mdbr-leaf-mt](https://huggingface.co/MongoDB/mdbr-leaf-mt)
and hierarchically clustered.