---
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - en
tags:
  - dclm
  - synthetic-data
  - pretraining
  - format-aware
pretty_name: DCLM Cross-Over Source
---

# DCLM Cross-Over Source

A subset of DCLM-Baseline selected for synthetic augmentation with format-aware prompt routing.

## Selection

- Sampled 3 of the 27,938 shards
- Word-count filter: 50-8,000 words
- Per-site cap: 10,000 documents
- Format detection: skip prompts that duplicate a document's native format
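The word-count filter above can be sketched as a simple range check. This is a minimal sketch, assuming whitespace tokenization; the exact word-counting method used for the dataset is not documented here.

```python
def passes_length_filter(text: str, lo: int = 50, hi: int = 8000) -> bool:
    """Return True if the document's word count falls in [lo, hi].

    Word count is approximated by whitespace splitting (an assumption;
    the actual tokenization used to build this dataset may differ).
    """
    n = len(text.split())
    return lo <= n <= hi


# A 100-word document passes; a 2-word document does not.
assert passes_length_filter("word " * 100)
assert not passes_length_filter("too short")
```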

## Stats

| Metric | Value |
|---|---|
| Source docs scanned | 255,841 |
| Selected | 251,661 |
| Total words | 196,694,035 |
| Avg words/doc | 781 |
| Length filtered | 4,180 |
| Site capped | 0 |
| All formats native | 0 |
| Output shards | 3 |

## Prompt Applicability

| Prompt | Applicable | Would Skip |
|---|---|---|
| FAQ | 250,245 | 1,416 |
| Math | 250,428 | 1,233 |
| Table | 251,632 | 29 |
| Tutorial | 239,300 | 12,361 |

## Schema

| Field | Type | Description |
|---|---|---|
| `id` | str | Stable hash |
| `text` | str | Document text |
| `url` | str | Source URL |
| `quality_score` | float | DCLM fastText score |
| `word_count` | int | Word count |
| `apply_prompts` | str (JSON list) | Prompts to run |
| `skip_prompts` | str (JSON list) | Prompts to skip |
| `num_applicable_prompts` | int | How many prompts apply |
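The two prompt-list fields are JSON-encoded strings and decode with the standard library. A minimal sketch on an invented row (all field values here are hypothetical, not taken from the dataset):

```python
import json

# Hypothetical row mirroring the schema above (values invented for illustration).
row = {
    "id": "abc123",
    "text": "Example document text.",
    "url": "https://example.com/page",
    "quality_score": 0.42,
    "word_count": 4,
    "apply_prompts": json.dumps(["faq", "math", "table"]),
    "skip_prompts": json.dumps(["tutorial"]),
    "num_applicable_prompts": 3,
}

# Decode the JSON-list fields into Python lists.
apply_list = json.loads(row["apply_prompts"])
skip_list = json.loads(row["skip_prompts"])

# The count field matches the decoded apply list, and the two
# lists do not overlap.
assert len(apply_list) == row["num_applicable_prompts"]
assert not set(apply_list) & set(skip_list)
```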

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("essobi/dclm-crossover-source", split="train")

# Docs for the FAQ prompt only
faq_docs = ds.filter(lambda x: "faq" in json.loads(x["apply_prompts"]))

# Docs suitable for all 4 prompts (best megadoc candidates)
full = ds.filter(lambda x: x["num_applicable_prompts"] == 4)
```

## License

CC-BY-4.0