ethanker committed · verified · Commit 4fe2d5b · 1 parent: 2a9b282

Add concise experiment model card.

Files changed (1): README.md (+73, −0)
---
license: mit
datasets:
- ethanker/nanomind_1m
language:
- en
library_name: transformers
tags:
- gpt
- decoder-only
- llama
- tiny
pipeline_tag: text-generation
---

# nanomind-step-002000 (early experiment checkpoint)

This is an early checkpoint (step 2,000) from a small decoder-only GPT-style experiment. It is shared primarily for transparency and to help others reproduce or build upon the setup. This checkpoint is not production-ready.

## What this is
- Model: small LLaMA-style decoder-only transformer (RMSNorm, SwiGLU, RoPE, MQA/GQA-compatible attention)
- Checkpoint: step_002000 from run1
- Data: curated 1M-document English mix, hosted at the public dataset repo: [ethanker/nanomind_1m](https://huggingface.co/datasets/ethanker/nanomind_1m)
- Intended use: research and experimentation only

## How it was trained (run1)
- Script: `train_run1.py` (included here), with the exact launch command in `RUN_COMMAND.txt`.
- Key settings used for run1:
  - seq_len 2048, hidden_size 512, n_layers 16, n_heads 8, n_kv_heads 1
  - global_batch_size 64, micro_batch_size 1, AdamW lr 1e-3, warmup 2000 steps
  - bf16 autocast, gradient clipping at 1.0

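For a rough sense of scale, the settings above imply a model in the tens of millions of parameters. The sketch below estimates the count from the stated dimensions; the vocab size (32,000) and SwiGLU intermediate size (1,408, ~8/3 × hidden rounded up) are assumptions not stated in this card, and tied embeddings are assumed:

```python
# Rough parameter count for the run1 config above.
# Assumed (not stated in the card): vocab_size = 32_000, SwiGLU
# intermediate size = 1_408, tied input/output embeddings.
hidden, layers, heads, kv_heads = 512, 16, 8, 1
head_dim = hidden // heads          # 64
vocab, inter = 32_000, 1_408

# Attention: Q uses all heads; K/V use n_kv_heads (MQA here); plus output proj.
attn = hidden * (heads * head_dim)            # Q projection
attn += 2 * hidden * (kv_heads * head_dim)    # K and V projections
attn += (heads * head_dim) * hidden           # output projection

# SwiGLU MLP: gate, up, and down projections.
mlp = 3 * hidden * inter

# Two RMSNorm weight vectors per layer.
norms = 2 * hidden

per_layer = attn + mlp + norms
embed = vocab * hidden                        # tied embeddings counted once
total = layers * per_layer + embed + hidden   # + final norm
print(f"~{total / 1e6:.1f}M parameters")      # → ~60.4M parameters
```

The MQA choice (n_kv_heads 1) is visible in the arithmetic: K/V projections are 8× smaller than Q, which also shrinks the KV cache at inference.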
## Quick eval snapshot (for context only)
- In-domain perplexity (small slice): ~1.06 (expected to be low for an early-stage, in-domain evaluation)
- Generations: fluent but sometimes regurgitative; this is a very early checkpoint

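To put the snapshot above in context: perplexity is the exponential of the mean per-token cross-entropy, so ~1.06 corresponds to a mean loss of only ~0.058 nats/token. Numbers this low on an in-domain slice usually reflect easy or partially memorized data rather than general capability:

```python
import math

# Perplexity is exp of the mean per-token cross-entropy (in nats).
def perplexity(mean_nll: float) -> float:
    return math.exp(mean_nll)

# The ~1.06 in-domain figure above implies a mean loss of ~0.058 nats/token.
loss = math.log(1.06)
print(f"loss ≈ {loss:.3f} nats/token, ppl ≈ {perplexity(loss):.2f}")
```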
## Optimizations implemented for subsequent runs
These were implemented in the training/data pipeline for future iterations (beyond this checkpoint):
- Near-duplicate filtering (MinHash + LSH) and stronger boilerplate heuristics
- Optional gradient checkpointing and torch.compile for better memory use and throughput
- Periodic quick perplexity checks on a small token budget

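A minimal stdlib sketch of the MinHash idea behind the near-duplicate filter, for readers unfamiliar with it: seeded hashes approximate random permutations of the shingle set, and the fraction of matching signature slots estimates Jaccard similarity. The shingle size (3) and signature length (64) are arbitrary choices for this sketch; a real pipeline would also add LSH banding so candidates are found without all-pairs comparison:

```python
import hashlib

def shingles(text: str, n: int = 3) -> set:
    """Word n-gram shingles; n=3 is an arbitrary choice for this sketch."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash(sh: set, num_hashes: int = 64) -> list:
    """One min value per seeded hash approximates one random permutation."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in sh
        ))
    return sig

def est_jaccard(a: str, b: str) -> float:
    """Fraction of agreeing signature slots estimates Jaccard similarity."""
    sa, sb = minhash(shingles(a)), minhash(shingles(b))
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

# Near-duplicates share most shingles, so their signatures mostly agree:
doc = "the quick brown fox jumps over the lazy dog near the river bank"
dup = "the quick brown fox jumps over the lazy dog near the river bend"
other = "completely different text about training small language models today"
print(est_jaccard(doc, dup), est_jaccard(doc, other))  # high vs. near zero
```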
References:
- Chinchilla compute-optimal scaling: https://arxiv.org/abs/2203.15556
- Deduplication improves LMs: https://arxiv.org/abs/2107.06499
- Deduplication mitigates privacy risks: https://arxiv.org/abs/2202.06539
- FlashAttention-3: https://arxiv.org/abs/2407.08608
- YaRN long-context scaling: https://arxiv.org/abs/2309.00071

## Load and sample
```python
from transformers import AutoTokenizer, LlamaForCausalLM
import torch

repo = 'ethanker/nanomind-step-002000'
tok = AutoTokenizer.from_pretrained(repo, use_fast=True)

# bf16 on GPU, fp32 on CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
dtype = torch.bfloat16 if device == 'cuda' else torch.float32
model = LlamaForCausalLM.from_pretrained(repo, torch_dtype=dtype)
model.eval().to(device)

prompt = "Once upon a time,"
inputs = tok(prompt, return_tensors='pt').to(model.device)
out = model.generate(**inputs, do_sample=True, top_p=0.9,
                     temperature=0.8, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```

## Files
- `model.safetensors`, tokenizer and config files
- `train_run1.py` (training code snapshot)
- `RUN_COMMAND.txt` (exact launch command used)

## Notes
- Early and exploratory; expect limited generalization and occasional regurgitation.
- Please prefer the referenced dataset repo and scripts for reproducibility and your own experiments.