---
license: mit
datasets:
- ethanker/nanomind_1m
language:
- en
library_name: transformers
tags:
- gpt
- decoder-only
- llama
- tiny
pipeline_tag: text-generation
---

# nanomind-step-002000 (early experiment checkpoint)

This is an early checkpoint (step 2,000) from a small decoder-only GPT-style experiment. It is shared primarily for transparency and to help others reproduce or build upon the setup. This checkpoint is not production-ready.

## What this is
- Model: small LLaMA-style decoder-only transformer (RMSNorm, SwiGLU, RoPE, MQA/GQA-compatible); a matching config sketch follows this list
- Checkpoint: step_002000 from run1
- Data: curated 1M-document English mix, hosted in the public dataset repo [ethanker/nanomind_1m](https://huggingface.co/datasets/ethanker/nanomind_1m)
- Intended use: research and experimentation only

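For orientation, the architecture described above maps onto a standard Hugging Face `LlamaConfig` (RMSNorm, SwiGLU, and RoPE are the LLaMA defaults, and a single KV head gives MQA). The sketch below uses the run1 values listed in the next section; the authoritative configuration is the `config.json` shipped with this checkpoint, and the vocabulary size here is simply left at the library default.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Approximate config matching the architecture described above.
# The real values live in this repo's config.json.
config = LlamaConfig(
    hidden_size=512,
    num_hidden_layers=16,
    num_attention_heads=8,
    num_key_value_heads=1,        # MQA: all query heads share one KV head
    max_position_embeddings=2048,
    # vocab_size / intermediate_size are left at library defaults here;
    # take the real values from the checkpoint's config.json.
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```
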
## How it was trained (run1)
- Script: `train_run1.py` (included here), with the exact launch command in `RUN_COMMAND.txt`.
- Key settings used for run1 (a schematic training-step sketch follows this list):
  - seq_len 2048, hidden_size 512, n_layers 16, n_heads 8, n_kv_heads 1
  - global_batch_size 64, micro_batch_size 1, AdamW lr 1e-3, warmup 2000 steps
  - bf16 autocast, gradient clipping at 1.0

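A rough sketch of how those settings fit together in a plain PyTorch loop is shown below. `model`, `get_batch`, and `total_steps` are placeholders, a CUDA device is assumed for the bf16 autocast, and the real logic lives in `train_run1.py`.

```python
import torch

# Schematic only: model, get_batch, and total_steps are placeholders.
micro_batch_size, global_batch_size = 1, 64
accum_steps = global_batch_size // micro_batch_size   # 64 micro-steps per optimizer update
warmup_steps, base_lr = 2000, 1e-3

optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps)  # linear warmup
)

for step in range(total_steps):
    optimizer.zero_grad(set_to_none=True)
    for _ in range(accum_steps):
        input_ids, labels = get_batch(micro_batch_size, seq_len=2048)  # placeholder data loader
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):  # bf16 autocast
            loss = model(input_ids=input_ids, labels=labels).loss / accum_steps
        loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping at 1.0
    optimizer.step()
    scheduler.step()
```
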
## Quick eval snapshot (for context only)
- In-domain perplexity on a small held-out slice: ~1.06 (expected to be low for an early, in-domain evaluation; a generic perplexity sketch follows below)
- Generations: fluent but sometimes regurgitate training text; this is a very early checkpoint

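For reference, a generic way to run such a quick perplexity check on a small slice of text looks like the sketch below; this is not necessarily the exact evaluation script used to produce the number above.

```python
import math
import torch

@torch.no_grad()
def quick_perplexity(model, tok, texts, max_length=2048):
    """Average token-level perplexity over a small list of documents."""
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tok(text, return_tensors="pt", truncation=True, max_length=max_length)
        input_ids = enc.input_ids.to(model.device)
        # The model's loss is the mean cross-entropy over predicted tokens.
        loss = model(input_ids=input_ids, labels=input_ids).loss
        n_predicted = input_ids.numel() - 1
        total_nll += loss.item() * n_predicted
        total_tokens += n_predicted
    return math.exp(total_nll / total_tokens)
```
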
## Optimizations implemented for subsequent runs
These were added to the training/data pipeline for future iterations; they are not reflected in this checkpoint:
- Near-duplicate filtering (MinHash + LSH) and stronger boilerplate heuristics (see the sketch below)
- Optional gradient checkpointing and `torch.compile` for better memory use and throughput (a short snippet follows the references)
- Periodic quick perplexity checks on a small token budget

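As an illustration of the near-duplicate filtering step, a minimal MinHash + LSH pass could look like the following sketch. It uses the `datasketch` library; the shingle size and similarity threshold are assumptions, not the pipeline's actual settings.

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text, num_perm=128, shingle=5):
    """MinHash signature over word 5-gram shingles (assumed granularity)."""
    words = text.split()
    sig = MinHash(num_perm=num_perm)
    for i in range(max(1, len(words) - shingle + 1)):
        sig.update(" ".join(words[i:i + shingle]).encode("utf-8"))
    return sig

def dedup(docs, threshold=0.8, num_perm=128):
    """Keep the first occurrence of each near-duplicate cluster."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for idx, doc in enumerate(docs):
        sig = minhash_of(doc, num_perm=num_perm)
        if not lsh.query(sig):          # no near-duplicate indexed so far
            lsh.insert(str(idx), sig)
            kept.append(doc)
    return kept
```
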
References:
- Chinchilla compute-optimal scaling: https://arxiv.org/abs/2203.15556
- Deduplication improves LMs: https://arxiv.org/abs/2107.06499
- Dedup mitigates privacy risks: https://arxiv.org/abs/2202.06539
- FlashAttention-3: https://arxiv.org/abs/2407.08608
- YaRN long-context: https://arxiv.org/abs/2309.00071

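The optional gradient checkpointing and `torch.compile` mentioned above correspond to standard Transformers and PyTorch 2.x APIs; enabling them on a loaded model is roughly:

```python
import torch

# Assumes `model` is a loaded LlamaForCausalLM (see "Load and sample" below).
model.gradient_checkpointing_enable()   # recompute activations in backward to save memory
model = torch.compile(model)            # compile the forward pass (PyTorch 2.x)
```
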
## Load and sample
```python
from transformers import AutoTokenizer, LlamaForCausalLM
import torch

model_id = 'ethanker/nanomind-step-002000'
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# bf16 on GPU, fp32 on CPU.
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
)
model.eval().to('cuda' if torch.cuda.is_available() else 'cpu')

prompt = "Once upon a time,"
inputs = tok(prompt, return_tensors='pt').to(model.device)
# Nucleus sampling; adjust top_p / temperature / max_new_tokens to taste.
out = model.generate(**inputs, do_sample=True, top_p=0.9, temperature=0.8, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```

## Files
- `model.safetensors` plus tokenizer and config files
- `train_run1.py` (snapshot of the training code)
- `RUN_COMMAND.txt` (exact launch command used)

## Notes
- Early and exploratory; expect limited generalization and occasional regurgitation of training text.
- For reproducibility and your own experiments, please use the referenced dataset repo and the included training script.