---
license: apache-2.0
task_categories:
  - text-generation
tags:
  - marin
  - token-counts
  - pretraining
pretty_name: Marin Token Counts
---

# Marin Token Counts

Token counts for all datasets used in Marin pretraining runs.

## Schema

| Column | Type | Description |
|---|---|---|
| `dataset` | string | Dataset identifier |
| `marin_tokens` | int | Number of tokens after tokenization |
| `category` | string | Content domain (web, code, math, academic, books, etc.) |
| `synthetic` | bool | Whether the data is LLM-generated or LLM-translated |
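A small sketch of working with this schema. The rows below are made-up stand-ins, not real Marin counts, and the repo id in the commented `load_dataset` call is deliberately omitted; only the column names and types come from the table above.

```python
from collections import defaultdict

# In practice you would pull the real table with the `datasets` library:
#   from datasets import load_dataset
#   rows = load_dataset("<repo-id>", split="train")  # fill in the actual repo id
# The rows below are illustrative stand-ins following the documented schema.
rows = [
    {"dataset": "example_web_crawl", "marin_tokens": 1_200_000_000,
     "category": "web", "synthetic": False},
    {"dataset": "example_code_corpus", "marin_tokens": 800_000_000,
     "category": "code", "synthetic": False},
    {"dataset": "example_synthetic_math", "marin_tokens": 150_000_000,
     "category": "math", "synthetic": True},
]

# Total token budget across all datasets.
total = sum(r["marin_tokens"] for r in rows)

# Tokens per content category.
by_category = defaultdict(int)
for r in rows:
    by_category[r["category"]] += r["marin_tokens"]

# Fraction of tokens that are LLM-generated or LLM-translated.
synthetic_share = sum(r["marin_tokens"] for r in rows if r["synthetic"]) / total
```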

## Categories

- **web**: Quality-classified Common Crawl text (Nemotron-CC)
- **code**: Source code and code-related documents
- **math**: Math-focused extractions and competition problems
- **academic**: Peer-reviewed papers and abstracts
- **reasoning**: Cross-domain reasoning and formal logic
- **books**: Digitized public domain and open access books
- **legal**: Court decisions, regulations, patents
- **government**: Parliamentary proceedings and publications
- **education**: Open educational resources and textbooks
- **encyclopedic**: Wiki-style reference content
- **forum**: Q&A sites and chat logs
- **documents**: PDF-extracted document text
- **translation**: Parallel translation corpora
- **news**: CC-licensed news articles
- **media**: Transcribed audio/video
- **supervised**: Curated task datasets
- **reference**: Niche reference sites
- **general**: General-domain content

## Updates

This dataset is updated by running `experiments/count_tokens.py` from the Marin repo, which reads tokenized dataset stats from GCS and pushes the results here.