zsprague committed · verified
Commit cb542a7 · 1 Parent(s): 7a09544

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +68 -29
README.md CHANGED
@@ -1,31 +1,70 @@
  ---
- dataset_info:
-   features:
-   - name: condition
-     dtype: string
-   - name: condition_description
-     dtype: string
-   - name: sequence_index
-     dtype: int64
-   - name: text
-     dtype: string
-   - name: n_tokens
-     dtype: int64
-   - name: n_chars
-     dtype: int64
-   - name: n_documents
-     dtype: int64
-   - name: source_files
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 808396
-     num_examples: 40
-   download_size: 245455
-   dataset_size: 808396
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
+ license: mit
+ tags:
+ - prepretraining
+ - data-inspection
+ - training-samples
  ---

# prepretraining-training-samples-v1

The first 10 training sequences (4096 tokens each) for each of the 4 conditions, decoded back to human-readable text. This shows exactly what the model sees during training; use it to verify data quality, ordering, and condition differentiation.

## Dataset Info

- **Rows**: 40 (4 conditions × 10 sequences)
- **Columns**: 8

## Columns

| Column | Type | Description |
|--------|------|-------------|
| condition | Value('string') | Data scheduling condition: baseline, front-load, constant-mix, or anneal |
| condition_description | Value('string') | What this condition feeds the model during its first phase |
| sequence_index | Value('int64') | Index of this sequence within the condition (0 = the very first sequence the model sees) |
| text | Value('string') | Decoded text of the 4096-token training sequence (human-readable) |
| n_tokens | Value('int64') | Number of tokens in this sequence (always 4096) |
| n_chars | Value('int64') | Character count of the decoded text |
| n_documents | Value('int64') | Number of documents packed into this sequence (separated by EOS tokens) |
| source_files | Value('string') | Source .npy file names (first 3) |
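
As a concrete illustration of how these columns fit together, here is a minimal sanity-check sketch over rows shaped like the table above. The `check_rows` helper and the placeholder `rows` list are hypothetical, not part of the dataset; with the real data you would pass the loaded split instead.

```python
from collections import defaultdict

def check_rows(rows):
    """Sanity-check rows shaped like this dataset: 4 conditions x 10 sequences,
    each exactly 4096 tokens, indexed 0..9 within its condition."""
    by_condition = defaultdict(list)
    for row in rows:
        assert row["n_tokens"] == 4096   # every sequence is fixed-length
        assert row["n_documents"] >= 1   # at least one packed document
        by_condition[row["condition"]].append(row["sequence_index"])
    for condition, indices in by_condition.items():
        # sequence_index should cover 0..9 with no gaps or duplicates
        assert sorted(indices) == list(range(10)), condition
    return sorted(by_condition)

# Build the full 4 x 10 grid of placeholder rows to exercise the check:
rows = [
    {"condition": c, "sequence_index": i, "n_tokens": 4096, "n_documents": 1}
    for c in ("baseline", "front-load", "constant-mix", "anneal")
    for i in range(10)
]
print(check_rows(rows))
```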

## Generation Parameters

```json
{
  "script_name": "analysis/sample_training_batches.py",
  "model": "N/A (data inspection, not model output)",
  "description": "First 10 training sequences (4096 tokens each) for each of the 4 conditions, decoded back to human-readable text. Shows exactly what the model sees during training. Use this to verify data quality, ordering, and condition differentiation.",
  "hyperparameters": {
    "sequence_length": 4096,
    "n_sequences_per_condition": 10,
    "tokenizer": "allenai/gpt-neox-olmo-dolma-v1_5"
  },
  "input_datasets": [
    "reasoning-degeneration-dev/prepretraining-gold-v1",
    "reasoning-degeneration-dev/prepretraining-web-v1"
  ],
  "experiment_id": "prepretraining",
  "artifact_type": "input_data",
  "visualizer_type": "table",
  "artifact_group": "data-inspection"
}
```
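
These parameters imply a straightforward sampling procedure: take the first `n_sequences_per_condition` windows of `sequence_length` tokens from each condition's token stream, then decode them. A rough NumPy sketch of that slicing step follows; the `take_first_sequences` helper and the synthetic `stream` are illustrative assumptions, not the actual `analysis/sample_training_batches.py` code.

```python
import numpy as np

SEQ_LEN = 4096   # sequence_length from the parameters above
N_SEQ = 10       # n_sequences_per_condition

def take_first_sequences(tokens, seq_len=SEQ_LEN, n_seq=N_SEQ):
    """Slice the first n_seq fixed-length training sequences from a flat token stream."""
    needed = seq_len * n_seq
    assert tokens.size >= needed, "token stream too short"
    return tokens[:needed].reshape(n_seq, seq_len)

# Stand-in for a stream loaded via np.load from the condition's .npy files:
stream = np.arange(SEQ_LEN * N_SEQ + 123, dtype=np.int64)
seqs = take_first_sequences(stream)
print(seqs.shape)  # (10, 4096)
# Decoding back to text would use the tokenizer named above, e.g.
# AutoTokenizer.from_pretrained("allenai/gpt-neox-olmo-dolma-v1_5").decode(seq)
```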

## Experiment Documentation

For complete experiment details, see [the prepretraining experiment notes](https://github.com/Zayne-sprague/SC-Research-Notes/tree/main/experiments/prepretraining).

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("reasoning-degeneration-dev/prepretraining-training-samples-v1", split="train")
print(f"Loaded {len(dataset)} rows")
```

---

*This dataset is tracked in [reasoning-degeneration-dev/PROJECT-MANIFEST](https://huggingface.co/datasets/reasoning-degeneration-dev/PROJECT-MANIFEST)*