---
configs:
  - config_name: all
    data_files:
      - path:
          - all.jsonl.zst
        split: train
  - config_name: sample_k100
    data_files:
      - path:
          - sample_k100.jsonl.zst
        split: train
  - config_name: sample_k1000
    data_files:
      - path:
          - sample_k1000.jsonl.zst
        split: train
  - config_name: sample_k10000
    data_files:
      - path:
          - sample_k10000.jsonl.zst
        split: train
  - config_name: sample_k200
    data_files:
      - path:
          - sample_k200.jsonl.zst
        split: train
  - config_name: sample_k2000
    data_files:
      - path:
          - sample_k2000.jsonl.zst
        split: train
  - config_name: sample_k20000
    data_files:
      - path:
          - sample_k20000.jsonl.zst
        split: train
  - config_name: sample_k500
    data_files:
      - path:
          - sample_k500.jsonl.zst
        split: train
  - config_name: sample_k5000
    data_files:
      - path:
          - sample_k5000.jsonl.zst
        split: train
  - config_name: sample_k50000
    data_files:
      - path:
          - sample_k50000.jsonl.zst
        split: train
license: odc-by
task_categories:
  - text-generation
  - feature-extraction
language:
  - en
---

# High Quality Text (Longer) Dataset

This dataset is agentlans/high-quality-text, filtered so that only chunks between 1750 and 2250 Meta Llama 3.1 tokens long were kept.
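The length filter above can be sketched as follows. This is a minimal illustration, not the actual preprocessing script: a whitespace `str.split` stands in for the Meta Llama 3.1 tokenizer (which is gated and requires a download) so the example runs offline; for real counts, substitute a Hugging Face tokenizer's `encode`.

```python
def keep_chunk(text, tokenize, lo=1750, hi=2250):
    """Return True if the chunk's token count falls within [lo, hi]."""
    return lo <= len(tokenize(text)) <= hi

# Stand-in tokenizer for demonstration only; swap in, e.g.,
# AutoTokenizer.from_pretrained(<llama-3.1 checkpoint>).encode
# to reproduce the token counts the filter actually used.
tokenize = str.split

print(keep_chunk("word " * 2000, tokenize))  # True: 2000 tokens is in range
print(keep_chunk("word " * 100, tokenize))   # False: too short
```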

The chunks were embedded using the MongoDB/mdbr-leaf-mt model and then hierarchically clustered.
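The embed-and-cluster step can be sketched as below. Random vectors stand in for MongoDB/mdbr-leaf-mt embeddings so the example runs offline (the real embeddings would come from something like `SentenceTransformer("MongoDB/mdbr-leaf-mt").encode(texts)`); the clustering method and cut threshold here are illustrative assumptions, not the settings used to build this dataset.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in embeddings: 50 chunks, 8 dimensions, unit-normalized.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 8))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Agglomerative (hierarchical) clustering on cosine distance.
Z = linkage(embeddings, method="average", metric="cosine")

# Cut the dendrogram into at most 10 flat clusters (illustrative choice).
labels = fcluster(Z, t=10, criterion="maxclust")
print(len(set(labels)))  # number of clusters obtained
```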