WillHeld committed
Commit b20a6e2 · verified · 1 parent: de27093

Add dataset README

Files changed (1):
  1. README.md +49 -3

README.md CHANGED
@@ -1,3 +1,49 @@
- ---
- license: mit
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-generation
+ tags:
+ - marin
+ - token-counts
+ - pretraining
+ pretty_name: Marin Token Counts
+ ---
+
+ # Marin Token Counts
+
+ Token counts for all datasets used in [Marin](https://github.com/marin-community/marin) pretraining runs.
+
+ ## Schema
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | `dataset` | string | Dataset identifier |
+ | `marin_tokens` | int | Number of tokens after tokenization |
+ | `category` | string | Content domain (web, code, math, academic, books, etc.) |
+ | `synthetic` | bool | Whether the data is LLM-generated or LLM-translated |
+
+ ## Categories
+
+ - **web** — Quality-classified Common Crawl text (Nemotron-CC)
+ - **code** — Source code and code-related documents
+ - **math** — Math-focused extractions and competition problems
+ - **academic** — Peer-reviewed papers and abstracts
+ - **reasoning** — Cross-domain reasoning and formal logic
+ - **books** — Digitized public domain and open access books
+ - **legal** — Court decisions, regulations, patents
+ - **government** — Parliamentary proceedings and publications
+ - **education** — Open educational resources and textbooks
+ - **encyclopedic** — Wiki-style reference content
+ - **forum** — Q&A sites and chat logs
+ - **documents** — PDF-extracted document text
+ - **translation** — Parallel translation corpora
+ - **news** — CC-licensed news articles
+ - **media** — Transcribed audio/video
+ - **supervised** — Curated task datasets
+ - **reference** — Niche reference sites
+ - **general** — General-domain content
+
+ ## Updates
+
+ This dataset is updated by running `experiments/count_tokens.py` from the Marin repo,
+ which reads tokenized dataset stats from GCS and pushes the results here.
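
A minimal sketch of how rows following the schema above could be consumed — for example, aggregating token counts per category while tracking the synthetic share. The rows and values below are illustrative, not real Marin counts:

```python
from collections import defaultdict

# Illustrative rows shaped like the README schema (made-up values).
rows = [
    {"dataset": "nemotron-cc-hq", "marin_tokens": 1_200, "category": "web", "synthetic": False},
    {"dataset": "stack-python", "marin_tokens": 800, "category": "code", "synthetic": False},
    {"dataset": "synthetic-web-sample", "marin_tokens": 500, "category": "web", "synthetic": True},
]

# Total tokens per category, with a separate tally for synthetic data.
totals = defaultdict(int)
synthetic_totals = defaultdict(int)
for row in rows:
    totals[row["category"]] += row["marin_tokens"]
    if row["synthetic"]:
        synthetic_totals[row["category"]] += row["marin_tokens"]

for category in sorted(totals):
    print(category, totals[category], synthetic_totals[category])
```

The same aggregation could be done with the Hugging Face `datasets` library after downloading the actual parquet files; plain dicts are used here only to keep the sketch self-contained.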