North Star: Building Christian-Grounded Language Models from Scratch on Consumer Hardware
Arthur · March 2026
Abstract
We present the North Star model family — a series of compact language models trained entirely from scratch on an Apple M4 Mac Mini (16GB unified memory), without using any pretrained foundation model. The family consists of three models: North Air 1 (124M parameters), North Star 1 (198M parameters), and Wind Arc 1.5 (198M parameters), all sharing a custom GQA Transformer architecture with SwiGLU activations, RoPE positional encodings, and RMSNorm. We describe our training pipeline, a novel layer-duplication growth strategy for expanding models without random weight initialization, and a teacher-logit caching technique that enables efficient knowledge distillation on memory-constrained hardware. All models are grounded in a Christian worldview, trained to reason from Scripture, and evaluated on theological, scientific, and coding tasks. We release all models, the tokenizer, training code, and this paper publicly.
"In the beginning was the Word, and the Word was with God, and the Word was God." — John 1:1
1. Introduction
Most language model research assumes access to large compute clusters, billions of training tokens, and pretrained checkpoints to fine-tune from. This paper asks: what can be done from scratch on a single consumer device?
We trained the North Star family on an Apple M4 Mac Mini with 16GB of unified memory. No cloud compute was used. No pretrained weights were borrowed. Every parameter was initialized from scratch and trained using custom PyTorch and MLX pipelines.
Beyond the engineering challenge, this project has a deliberate purpose: to demonstrate that AI systems can be built with explicit values — in our case, a Christian worldview. We believe Jesus Christ is Lord, that Scripture is the inspired and authoritative Word of God, and that the Gospel of Jesus Christ is the most important truth in history. These convictions are not incidental to the project — they are its foundation.
"The fear of the Lord is the beginning of wisdom." — Proverbs 9:10
The North Star models are designed to be helpful, honest, and grounded in biblical truth. They can discuss theology, science, coding, history, and everyday topics — always from a posture of intellectual humility and faithfulness to Scripture.
2. Architecture
All North Star models share a common architecture inspired by modern efficient LLMs, implemented in PyTorch for training and MLX for fine-tuning.
2.1 Core Design
| Component | Choice | Rationale |
|---|---|---|
| Attention | Grouped Query Attention (GQA) | Reduces KV cache size; efficient on unified memory |
| Activation | SwiGLU | Better gradient flow than ReLU/GELU |
| Position | Rotary Position Embeddings (RoPE) | Relative positions; extrapolates to longer contexts |
| Normalization | RMSNorm (pre-norm) | More stable than LayerNorm; faster |
| Tokenizer | SentencePiece BPE (32k vocab) | Compact, language-agnostic |
| Weight tying | Embed ↔ LM head | Reduces parameters; regularizes |
2.2 Model Configurations
| Model | Params | Layers | d_model | Heads | KV Heads | FF Hidden | Context |
|---|---|---|---|---|---|---|---|
| North Air 1 | 124M | 16 | 768 | 12 | 3 | 2048 | 512 |
| North Star 1 | 198M | 24 | 768 | 12 | 3 | 2048 | 512 |
| Wind Arc 1.5 | 198M | 24 | 768 | 12 | 3 | 2048 | 512 |
North Air 1 and North Star 1 / Wind Arc 1.5 share the same d_model and head configuration — they differ only in depth.
2.3 RoPE Configuration
We use a RoPE base frequency of 500,000 (vs. the standard 10,000), following findings from LLaMA 3 and Mistral that higher base frequencies improve context generalization. This allows the model to handle longer sequences without explicit long-context fine-tuning.
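The effect of the larger base can be seen directly in the per-pair inverse frequencies. This is an illustrative computation, not training code; it assumes only head_dim = d_model / heads = 768 / 12 = 64, which follows from the configurations above:

```python
import math

def rope_inv_freq(head_dim: int, base: float) -> list[float]:
    # Inverse frequency for dimension pair i: base ** (-i / head_dim),
    # for i = 0, 2, 4, ..., head_dim - 2 (one frequency per rotary pair).
    return [base ** (-i / head_dim) for i in range(0, head_dim, 2)]

std = rope_inv_freq(64, 10_000.0)    # standard base
ours = rope_inv_freq(64, 500_000.0)  # North Star base

# Wavelength (in token positions) of the slowest-rotating pair: 2*pi / inv_freq.
print(2 * math.pi / std[-1], 2 * math.pi / ours[-1])
```

Raising the base from 10,000 to 500,000 stretches the wavelength of the slowest-rotating dimensions, so token pairs far beyond the 512-token training window still receive distinct relative rotations.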
3. Tokenizer
We trained a custom SentencePiece BPE tokenizer with a vocabulary of 32,000 tokens on a corpus of English text covering theology, science, coding, mathematics, and general prose. The tokenizer uses:
- Byte-fallback for unknown characters
- Special tokens: <pad> (0), <unk> (1), <bos> (2), <eos> (3)
- No normalization stripping (preserves case and whitespace structure)
Training a custom tokenizer rather than reusing an existing one ensures the vocabulary is well-suited to our domain distribution, particularly for theological vocabulary (e.g., "atonement", "sanctification", "propitiation" tokenize as single or two-piece units rather than being fragmented).
4. Training Pipeline
4.1 Phase 0: Pretraining (North Air 1 / Wind Arc Base)
The base 124M model was pretrained using a standard causal language modeling objective on a curated corpus. Given memory constraints (16GB unified memory shared between CPU and GPU), we used:
- Batch size: 4–8 sequences of 512 tokens
- Optimizer: AdamW (lr=3e-4, weight decay=0.01)
- Framework: PyTorch with MPS (Apple GPU) backend
- Gradient clipping: 1.0
- Precision: full float32 (no mixed precision; bfloat16 on the MPS backend was not reliable at training time)
Training ran on the Apple M4's GPU through the MPS backend (the Neural Engine is not exposed to PyTorch), with careful memory management to avoid the OOM kills that larger batch sizes triggered.
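A single pretraining step with the settings above can be sketched as follows. The tiny embedding-plus-linear module is a stand-in for the real 124M GQA Transformer; the optimizer configuration, gradient clipping, and MPS device selection match the recipe described:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d = 256, 32  # toy sizes; the real model uses 32k vocab, d_model=768
model = nn.Sequential(nn.Embedding(vocab, d), nn.Linear(d, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
device = "mps" if torch.backends.mps.is_available() else "cpu"
model.to(device)

# Standard causal LM objective: predict token t+1 from tokens <= t.
tokens = torch.randint(0, vocab, (4, 64), device=device)
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab), targets.reshape(-1)
)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip at 1.0
opt.step()
opt.zero_grad()
```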
4.2 Phase 1: Supervised Fine-Tuning (SFT)
After pretraining, all models were fine-tuned using supervised instruction tuning (SFT) on a curated set of 70–90 Q&A pairs covering:
- Identity: who the model is, its purpose and values
- Theology: Gospel, Trinity, salvation, grace, key biblical figures
- Science: gravity, DNA, photosynthesis, cosmology
- Mathematics: Pythagorean theorem, derivatives, probability
- Coding: binary search, sorting algorithms, data structures
- History: major historical events and figures
- Everyday wisdom: productivity, learning, stress
SFT used a masked cross-entropy loss — only answer tokens contribute to the loss, not question tokens. This is critical for instruction tuning: the model must learn to generate good answers, not merely predict the next token of the question.
loss = sum(CE(logits, targets) * answer_mask) / sum(answer_mask)
SFT was run with MLX on Apple Silicon, which is significantly faster than PyTorch CPU for this task. A 10-epoch SFT run over ~70 pairs (batch=8) completed in under 10 minutes.
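The masked objective can be sketched in PyTorch (a simplified stand-in; the production SFT ran in MLX, but the loss is the same):

```python
import torch
import torch.nn.functional as F

def masked_ce(logits, targets, answer_mask):
    # Per-token cross-entropy, weighted by answer_mask (1 on answer tokens,
    # 0 on prompt tokens), averaged over the answer tokens only.
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape)
    return (per_token * answer_mask).sum() / answer_mask.sum().clamp(min=1)

torch.manual_seed(0)
logits = torch.randn(2, 5, 11)              # (batch, seq, vocab)
targets = torch.randint(0, 11, (2, 5))
mask = torch.tensor([[0., 0., 1., 1., 1.],  # leading zeros = question tokens
                     [0., 1., 1., 1., 0.]])
loss = masked_ce(logits, targets, mask)
```

With a mask of all ones this reduces to ordinary mean cross-entropy; zeroing the question positions is what forces the model to optimize answer generation rather than question modeling.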
4.3 Phase 2: Layer-Duplication Growth
To grow from 124M (16 layers) to 198M (24 layers) without random weight initialization, we developed a layer duplication strategy:
- Build a new model with the target depth (24 layers)
- Map each destination layer index to a source layer index using evenly spaced interpolation: src = round(dst_i * (n_src - 1) / (n_dst - 1))
- Copy all weights (attention, FFN, norms) from source to destination
This produces a 24-layer model in which some middle layers are exact copies of their neighbors. The model is immediately coherent (not random) and can be fine-tuned from this state. Because each duplicated layer is a trained copy rather than noise, the residual stream is only mildly perturbed — a similar transformation is applied twice — and the copies differentiate during SFT.
This is significantly more stable than random initialization: loss starts near the SFT-converged value of the source model rather than at ln(vocab_size) ≈ 10.4.
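The destination-to-source mapping can be sketched in a few lines. For the 16 → 24 growth used here, it maps endpoints to endpoints, duplicates roughly every other middle layer, and uses every source layer at least once:

```python
def growth_map(n_src: int, n_dst: int) -> list[int]:
    # Evenly spaced interpolation: destination layer d copies its weights
    # from source layer round(d * (n_src - 1) / (n_dst - 1)).
    return [round(d * (n_src - 1) / (n_dst - 1)) for d in range(n_dst)]

mapping = growth_map(16, 24)
print(mapping)  # some source indices appear twice; those layers are duplicated
```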
Results:
- Source model (16L) SFT loss after training: ~0.05
- Grown model (24L) loss at step 1: ~0.8 (vs ~4.0 for random init)
- Grown model loss after 10-epoch SFT: ~0.01
4.4 Phase 3: Knowledge Distillation (Attempted)
We attempted standard knowledge distillation (KL divergence from teacher logits) to train a larger student from the 124M teacher. Key findings:
Teacher logit caching: Running the teacher forward pass at every distillation step was prohibitively slow (~4s/step on CPU). We developed a logit pre-caching strategy:
- Tokenize the entire corpus into fixed-length chunks
- Run teacher forward pass on all chunks in one GPU-accelerated batch (MPS)
- Store logits as float16 numpy arrays (~1.9GB for 58 chunks of 512 tokens)
- Distillation steps sample from the cache — no teacher inference per step
This reduced the teacher overhead from ~4 s/step to effectively zero.
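The cache arithmetic and the write-once, sample-per-step pattern can be sketched with NumPy. The tiny array below is a stand-in; the real cache holds 58 chunks × 512 tokens × 32,000 vocabulary logits:

```python
import os
import tempfile

import numpy as np

# Size of the cache described above: float16 is 2 bytes per logit.
n_chunks, seq_len, vocab = 58, 512, 32_000
cache_gb = n_chunks * seq_len * vocab * 2 / 1e9  # ~1.9 GB

# Write-once / sample-per-step pattern, on a tiny stand-in cache.
rng = np.random.default_rng(0)
tiny = rng.standard_normal((4, 8, 16)).astype(np.float16)
path = os.path.join(tempfile.mkdtemp(), "teacher_logits.npy")
np.save(path, tiny)                         # one-time teacher forward pass
cached = np.load(path, mmap_mode="r")       # memory-mapped: no full load
step_logits = cached[int(rng.integers(0, 4))]  # sampled each distill step
```

Memory-mapping the cache means a distillation step touches only the chunk it samples, which is what keeps the 16GB budget intact.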
Numerical stability: Raw teacher logits have large magnitudes (±30–50). KL divergence with a temperature of T=2 requires numerically stable log-softmax:
log_p(x) = x/T - max(x/T) - log(Σ exp(x/T - max(x/T)))
Without the max subtraction, exp overflows to inf.
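A minimal NumPy version of the stable computation (the float16 line mirrors the cached teacher logits, for which the naive exponential overflows):

```python
import numpy as np

def log_softmax_t(x: np.ndarray, T: float = 2.0) -> np.ndarray:
    # Temperature-scaled log-softmax with max subtraction, as in the
    # formula above: x/T - max(x/T) - log(sum(exp(x/T - max(x/T)))).
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

logits = np.array([50.0, -30.0, 40.0])  # magnitudes seen from the teacher
lp = log_softmax_t(logits, T=2.0)       # finite, sums to a valid distribution

# Without max subtraction, exp(50/2) already overflows in float16:
naive = np.exp(logits.astype(np.float16) / np.float16(2.0))
```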
Key challenge: With a randomly initialized student, KL divergence loss starts near ln(32000) ≈ 10.4 — the student is uniform and the teacher is peaked. Convergence from this starting point was slow and unstable on our hardware. The layer-duplication approach proved more practical for our constraints.
5. Hardware and Software
| Component | Details |
|---|---|
| Hardware | Apple Mac Mini M4, 16GB unified memory |
| OS | macOS Sequoia |
| Python | 3.14 |
| Training framework | PyTorch 2.x (pretraining, SFT), MLX (SFT, distillation) |
| Tokenizer | SentencePiece |
| Inference | Flask API server |
| UI | Next.js 15 |
Memory budget (approximate, 16GB total):
- OS + system: ~4GB
- PyTorch model (124M, float32): ~500MB
- MLX model (198M, float32): ~800MB + optimizer states ≈ 2.4GB
- Logit cache (float16): ~1.9GB
- Total peak: ~9–11GB
We found that MLX is substantially more memory-efficient than PyTorch for inference and fine-tuning on Apple Silicon, due to its lazy evaluation model and unified memory management.
6. Evaluation
We evaluated models qualitatively on five categories of prompts. Formal benchmarks (MMLU, HellaSwag, etc.) were not run — our models use a custom tokenizer and non-standard checkpoint format that would require additional adapter work.
6.1 Identity and Values
All models correctly identify themselves and articulate a Christian worldview when asked about their purpose, beliefs, and values.
North Star 1 — "Who are you?":
I am North Star 1, a compact AI assistant built to be helpful, honest, and grounded in a Christian worldview. I can help with theology, science, coding, math, history, and general knowledge.
6.2 Theology
Models demonstrate consistent theological accuracy on core doctrines.
"What is the Gospel?":
The Gospel is the good news that Jesus Christ died for our sins, was buried, and rose again on the third day. All who repent and trust in Him receive forgiveness and eternal life as a free gift by grace through faith, not by works.
This response correctly summarizes 1 Corinthians 15:3-4 and Ephesians 2:8-9.
6.3 Science
"What is gravity?":
Gravity is the attractive force between objects with mass. Newton described it as F = Gm1m2/r^2. Einstein refined this: gravity is the curvature of spacetime caused by mass. It governs orbits, tides, and the structure of the universe.
6.4 Coding
Models reliably produce correct implementations of standard algorithms (binary search, merge sort, etc.) — these were included explicitly in SFT data and are reproduced accurately.
6.5 Limitations
- Context length: 512 tokens limits multi-turn conversation depth
- Repetition: Models occasionally repeat themselves when sampling without a repetition penalty
- Hallucination: Like all LLMs, models may generate confident but incorrect statements, especially on obscure topics not covered in SFT data
- Math: Arithmetic beyond simple examples is unreliable
- Scale: These are small models. Responses are competent but not at the level of GPT-4 or Claude
7. Christian Worldview
We want to be transparent about the values embedded in these models — not hide them, but explain them.
We built these models because we believe truth matters, and because we wanted AI tools that reflect rather than contradict our faith. The secular assumption of much AI development is that values are either absent or culturally neutral. We reject that assumption. All AI systems embed values — we have simply chosen to be explicit about ours.
Our core convictions:
Jesus Christ is Lord. He is not one teacher among many, but the eternal Son of God, fully human and fully divine, who died and rose again.
Scripture is authoritative. The Bible is the inspired Word of God, sufficient for faith and practice.
Faith and reason are compatible. Newton, Faraday, Mendel, and countless others were scientists motivated by Christian faith. The universe is rational because it was created by a rational God.
The Gospel is for everyone. "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life." (John 3:16)
We invite anyone reading this paper — researcher, developer, or curious person — to read the Gospel of John and ask God sincerely whether He is real. He answers.
8. Release
All artifacts are released publicly:
| Artifact | URL |
|---|---|
| North Air 1 (124M) | huggingface.co/arthu1/north-air-1 |
| North Star 1 (198M) | huggingface.co/arthu1/north-star-1 |
| Wind Arc 1.5 (198M) | huggingface.co/arthu1/wind-arc-1.5 |
| Shared Tokenizer | huggingface.co/arthu1/north-tokenizer |
| Chat Demo | huggingface.co/spaces/arthu1/north-star-chat |
License: Apache 2.0
9. Conclusion
We have demonstrated that it is possible to train capable language models from scratch on consumer hardware (Apple M4 Mac Mini, 16GB) in hours rather than months. Key contributions:
- Layer-duplication growth — a stable, practical method for expanding model depth without random initialization
- Teacher logit caching — enables efficient knowledge distillation on memory-constrained hardware
- MLX-based SFT — fast, memory-efficient fine-tuning on Apple Silicon
- Explicit value alignment — demonstrated that LLMs can be trained with a coherent, stated worldview
We believe the future of AI is not one where a handful of large organizations control all capable models. Small models, trained with care and purpose, running on personal hardware, can serve individuals and communities well.
"Trust in the Lord with all your heart, and do not lean on your own understanding. In all your ways acknowledge him, and he will make straight your paths." — Proverbs 3:5-6
Acknowledgements
Soli Deo Gloria — to God alone be the glory.
References
- Vaswani et al. (2017). Attention Is All You Need.
- Su et al. (2021). RoFormer: Enhanced Transformer with Rotary Position Embedding.
- Ainslie et al. (2023). GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints.
- Touvron et al. (2023). LLaMA: Open and Efficient Foundation Language Models.
- Hinton et al. (2015). Distilling the Knowledge in a Neural Network.
- Apple MLX Team (2023). MLX: An array framework for Apple Silicon.