---
license: other
size_categories:
  - 1M<n<10M
source_datasets: togethercomputer/Long-Data-Collections
task_categories:
  - text-generation
  - fill-mask
  - feature-extraction
configs:
  - config_name: cleaned
    data_files:
      - split: train
        path: cleaned/train-*
  - config_name: cleaned-dedup
    data_files:
      - split: train
        path: cleaned-dedup/train-*
  - config_name: cleaned-dedup-en
    data_files:
      - split: train
        path: cleaned-dedup-en/train-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  - config_name: cleaned
    features:
      - name: text
        dtype: string
      - name: meta
        dtype: string
    splits:
      - name: train
        num_bytes: 16969436991
        num_examples: 2759555
    download_size: 9521997027
    dataset_size: 16969436991
  - config_name: cleaned-dedup
    features:
      - name: text
        dtype: string
      - name: meta
        dtype: string
    splits:
      - name: train
        num_bytes: 13009681081
        num_examples: 2712907
    download_size: 7319241627
    dataset_size: 13009681081
  - config_name: cleaned-dedup-en
    features:
      - name: text
        dtype: string
      - name: meta
        dtype: string
    splits:
      - name: train
        num_bytes: 12723856310.202166
        num_examples: 2653304
    download_size: 7180653999
    dataset_size: 12723856310.202166
  - config_name: default
    features:
      - name: text
        dtype: string
      - name: meta
        dtype: string
    splits:
      - name: train
        num_bytes: 16821991568.354612
        num_examples: 2759555
    download_size: 9685120636
    dataset_size: 16821991568.354612
tags:
  - long boi
---

# Dataset Card for "Long-Data-Col-rp_pile_pretrain"

This dataset is a subset of `togethercomputer/Long-Data-Collections`, specifically the `rp_sub.jsonl.zst` and `pile_sub.jsonl.zst` files from the `pretrain` split.

As with the source dataset, we do not attempt to modify or change the licenses of the underlying data. Refer to the source dataset (and its own source datasets) for details.
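Each config listed in the metadata can be loaded by name with the `datasets` library. A minimal usage sketch (the repo id is assumed from the card title; this downloads several GB):

```python
from datasets import load_dataset

# Repo id assumed from the card title; config names per the YAML metadata above.
ds = load_dataset(
    "pszemraj/Long-Data-Col-rp_pile_pretrain",
    "cleaned-dedup-en",  # or "default", "cleaned", "cleaned-dedup"
    split="train",
)
print(ds[0]["text"][:100])
```

Each row has two string columns, `text` and `meta`, as described in the `dataset_info` features above.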

## Changes

  1. As this is intended to be a long-text dataset, we drop all rows where `text` contains 250 characters or fewer. This removes approximately 100k rows from the raw data. The resulting statistics are below.
| statistic | `text_len` |
|-----------|------------|
| count     | 2.75956e+06 |
| mean      | 6195.11 |
| std       | 56364.9 |
| min       | 251 |
| 25%       | 1102 |
| 50%       | 2147 |
| 75%       | 4762 |
| max       | 4.66452e+07 |
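The length filter in step 1 can be sketched as follows; the threshold comes from the text above, while the toy rows are hypothetical stand-ins for the raw data:

```python
def keep_long(example: dict) -> bool:
    """Return True when the row's text exceeds 250 characters."""
    return len(example["text"]) > 250

# Hypothetical toy rows standing in for the raw jsonl data.
rows = [
    {"text": "too short", "meta": "{}"},
    {"text": "x" * 300, "meta": "{}"},
]
kept = [r for r in rows if keep_long(r)]
print(len(kept))  # → 1
```

The same predicate can be passed to `datasets.Dataset.filter` to apply the cut over the full dataset.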