---
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - en
tags:
  - dclm
  - synthetic-data
  - pretraining
  - format-aware
pretty_name: DCLM Cross-Over Source
---

# DCLM Cross-Over Source

A subset of DCLM-Baseline selected for synthetic augmentation with format-aware prompt routing.

## Selection

- Shard sampling: every 3rd shard (9,313 of 27,938 shards)
- Word-count filter: 50-8,000 words per document
- Per-site cap: 10,000 documents
- Format detection: skip prompts that would duplicate a document's native format
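The selection steps above can be sketched as a single pass over the corpus. This is a hedged illustration, not the dataset's actual pipeline: the `select` function, the URL-to-site parsing, and the `detect_native_formats` detector are all assumptions.

```python
import json
from collections import defaultdict

# Thresholds taken from the Selection list above.
MIN_WORDS, MAX_WORDS = 50, 8000
PER_SITE_CAP = 10_000
PROMPTS = ("faq", "math", "table", "tutorial")

def select(docs, detect_native_formats):
    """Yield docs passing the word-count filter and per-site cap.

    `detect_native_formats(text)` is a hypothetical detector returning
    the set of prompt formats the document already contains natively.
    """
    site_counts = defaultdict(int)
    for doc in docs:
        wc = len(doc["text"].split())
        if not (MIN_WORDS <= wc <= MAX_WORDS):
            continue  # counted as "Length filtered"
        # Crude site key: the host part of the URL (assumption).
        site = doc["url"].split("/")[2] if "://" in doc["url"] else doc["url"]
        if site_counts[site] >= PER_SITE_CAP:
            continue  # counted as "Site capped"
        native = detect_native_formats(doc["text"])
        apply_prompts = [p for p in PROMPTS if p not in native]
        if not apply_prompts:
            continue  # counted as "All formats native"
        site_counts[site] += 1
        doc["word_count"] = wc
        doc["apply_prompts"] = json.dumps(apply_prompts)
        doc["skip_prompts"] = json.dumps(sorted(native))
        doc["num_applicable_prompts"] = len(apply_prompts)
        yield doc
```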

## Stats

| Metric | Value |
| --- | --- |
| Source docs scanned | 54,947,699 |
| Selected | 54,017,165 |
| Total words | 44,119,449,000 |
| Avg words/doc | 816 |
| Length filtered | 930,534 |
| Site capped | 0 |
| All formats native | 0 |
| Output shards | 9,313 |

## Prompt Applicability

| Prompt | Applicable | Would Skip |
| --- | --- | --- |
| FAQ | 53,731,426 | 285,739 |
| Math | 53,781,379 | 235,786 |
| Table | 54,012,976 | 4,189 |
| Tutorial | 50,824,756 | 3,192,409 |
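As a consistency check, each prompt's Applicable and Would Skip counts should sum to the 54,017,165 selected documents. A quick verification of the reported numbers:

```python
# Per-prompt counts as reported in the Prompt Applicability table.
SELECTED = 54_017_165
counts = {
    "faq": (53_731_426, 285_739),
    "math": (53_781_379, 235_786),
    "table": (54_012_976, 4_189),
    "tutorial": (50_824_756, 3_192_409),
}
# Applicable + Would Skip must equal the Selected total for every prompt.
for prompt, (applicable, skipped) in counts.items():
    assert applicable + skipped == SELECTED, prompt
```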

## Schema

| Field | Type | Description |
| --- | --- | --- |
| `id` | str | Stable hash |
| `text` | str | Document text |
| `url` | str | Source URL |
| `quality_score` | float | DCLM fastText score |
| `word_count` | int | Word count |
| `apply_prompts` | str (JSON list) | Prompts to run |
| `skip_prompts` | str (JSON list) | Prompts to skip |
| `num_applicable_prompts` | int | How many prompts apply |
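A minimal illustration of one record conforming to this schema (all field values are invented), showing that the two prompt-list fields are JSON-encoded strings rather than native lists:

```python
import json

# Hypothetical record matching the schema above; values are made up.
record = {
    "id": "abc123",
    "text": "Example document text ...",
    "url": "https://example.com/page",
    "quality_score": 0.42,
    "word_count": 4,
    "apply_prompts": json.dumps(["math", "table", "tutorial"]),
    "skip_prompts": json.dumps(["faq"]),
    "num_applicable_prompts": 3,
}

# The count field should agree with the decoded apply_prompts list.
assert record["num_applicable_prompts"] == len(json.loads(record["apply_prompts"]))
```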

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("essobi/dclm-crossover-source", split="train")

# Docs for the FAQ prompt only
faq_docs = ds.filter(lambda x: "faq" in json.loads(x["apply_prompts"]))

# Docs suitable for all 4 prompts (best megadoc candidates)
full = ds.filter(lambda x: x["num_applicable_prompts"] == 4)
```

## License

CC-BY-4.0