Stentor2-12M
🔬 Research Artifact — Not a Production Model. This model has no safety tuning and is not suitable for deployment in any user-facing application. See Intended Uses for details.
Table of Contents
- What Is This?
- The Core Design Insight: Vocabulary Efficiency
- Head-to-Head: Stentor v1 vs Stentor2
- Quick Start
- Important Limitations
- Honest Notices
- PDF Tokens & The Replacement Character
- Model Architecture — Full Specification
- The Tokenizer: TokenMonster
- Training Infrastructure
- Training Hyperparameters — Complete Reference
- The T4 Mixed-Precision Recipe — Deep Dive
- Data Pipeline
- Weight Initialization
- Evaluation & Results
- Training Dynamics
- Use Cases & Intended Uses
- Out-of-Scope Uses
- Ethical Considerations & Societal Impact
- Inference Guide
- Free Inference — Try It Now
- Real Model Responses
- Quantization
- Format Conversion
- Speculative Decoding
- Bias, Risks & Limitations
- Related Work
- Environmental Impact
- Citation
What Is This?
Stentor2-12M is the first full (non-preview) release from the Stentor2 model family — a ground-up redesign of the original Stentor v1 line. At ~12.3M parameters, it is a compact base language model built entirely from scratch on free-tier Kaggle compute using two NVIDIA Tesla T4 GPUs.
Like all Stentor models, this is a base next-token predictor, not a chat assistant. It will not reliably follow instructions, has no safety tuning, and is best used for research, prototyping, speculative decoding, and edge-deployment experimentation. The value of this model is not its conversational capability — it's what it represents architecturally: a dramatic efficiency gain over v1 at the same scale, achieved by fixing the root cause of v1's underperformance.
The Core Design Insight: Vocabulary Efficiency
The most consequential change in Stentor2 is the replacement of the standard Llama/Mistral 32,768-token vocabulary with a purpose-built 8,000-token English vocabulary from the TokenMonster project (english-8000-strict-nocapcode-v1, padded to 8,064 for hardware alignment).
This is not a minor tweak — it is the entire architectural story of Stentor2.
Why Vocabulary Size Matters So Much at This Scale
In a transformer language model, the embedding table has shape [vocab_size × hidden_size]. When you tie word embeddings (share the embedding and output projection weights, which Stentor does), this table appears once in the parameter count. At 12M total parameters, the fraction consumed by this table dictates how much "brain" is left over for the actual transformer layers.
Stentor-12M (v1) used a 32,768-token vocabulary. At a hidden size of 192:
embedding_params = 32,768 × 192 = 6,291,456
total_params = 12,047,040
embedding_share = 52.2%
Over half of the model was a lookup table. The transformer stack — the part that actually learns language patterns — had fewer than 6 million parameters to work with.
Stentor2-12M uses an 8,064-token vocabulary. At a hidden size of 256:
embedding_params = 8,064 × 256 = 2,064,384
total_params = 12,294,400
embedding_share = 16.8%
By shrinking the vocabulary, the embedding table was cut from 6.3M to 2.1M parameters — freeing up ~4.2M parameters redistributed into transformer depth (12 layers vs 9) and width (hidden size 256 vs 192), where they contribute directly to language modeling quality.
The result is a ~70.1% reduction in perplexity (89.01 → ~26.6) compared to Stentor-12M v1.
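The arithmetic behind these embedding shares is easy to reproduce. A small sketch (the function name is mine, not from the training scripts):

```python
def embedding_share(vocab_size: int, hidden_size: int, total_params: int) -> float:
    # With tied embeddings, the [vocab_size x hidden_size] table is counted once.
    return vocab_size * hidden_size / total_params

v1 = embedding_share(32_768, 192, 12_047_040)   # Stentor-12M (v1)
v2 = embedding_share(8_064, 256, 12_294_400)    # Stentor2-12M
print(f"v1: {v1:.1%}  v2: {v2:.1%}")  # v1: 52.2%  v2: 16.8%
```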
Head-to-Head: Stentor v1 vs Stentor2
| Property | Stentor-12M (v1) | Stentor2-12M |
|---|---|---|
| Vocabulary | 32,768 (Mistral BPE) | 8,064 (TokenMonster English) |
| Hidden Size | 192 | 256 |
| Intermediate Size | 576 | 768 |
| Num Layers | 9 | 12 |
| Attention Heads | 3 | 4 |
| Head Dimension | 64 | 64 |
| Context Length | 512 tokens | 1,024 tokens |
| Total Parameters | 12,047,040 | 12,294,400 |
| Embedding Share | 52.2% | 16.8% |
| Non-Embedding Params | ~5.76M | ~10.23M |
| Source Token Budget | 200M | 240M |
| Total Token Budget | ~200M | 480M |
| Training Time | ~1.3h | ~2.1h |
| Training Processes | 1 | 2 |
| Best Perplexity | 89.01 | 26.61 |
| Perplexity Reduction | — | ~70.1% |
| BLiMP Accuracy | — | 68.95% |
| Tokenizer | Mistral BPE | TokenMonster |
| Architecture | LlamaForCausalLM | LlamaForCausalLM |
| Training Precision | fp16 | fp16 + FP32 norms/critical layers |
🚀 Quick Start
1. Install Dependencies
pip install transformers torch safetensors huggingface_hub tokenmonster
2. Load the Model
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "StentorLabs/Stentor2-12M",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(
    "StentorLabs/Stentor2-12M",
    trust_remote_code=True,
)
3. Generate Text
input_ids = torch.tensor(
    [tokenizer.encode("The history of computing")], dtype=torch.long
).to(next(model.parameters()).device)
attention_mask = torch.ones_like(input_ids)

with torch.inference_mode():
    output = model.generate(
        input_ids,
        attention_mask=attention_mask,
        max_new_tokens=80,
        do_sample=True,
        temperature=0.8,
        top_p=0.7,
        repetition_penalty=1.2,
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(output[0].tolist()))
Why `attention_mask`? The model's pad token and EOS token are the same ID. Without an explicit attention mask, HuggingFace throws a warning because it can't tell which tokens are real vs padding. Passing `torch.ones_like(input_ids)` tells the model that every token in the input is real.
4. Recommended Generation Settings
| Parameter | Recommended Range | Personal Favorite | Notes |
|---|---|---|---|
| `temperature` | 0.6 – 0.9 | 0.8 | Below 0.6 causes heavy `<22>` (PDF token) output; above 0.9 gets chaotic |
| `top_p` | 0.6 – 0.8 | 0.7 | Below 0.6 also increases `<22>` tokens significantly |
| `repetition_penalty` | 1.1 – 1.4 | 1.2 | Without this the model will loop; 1.2–1.3 hits the sweet spot |
| `max_new_tokens` | 10 – 500 | — | Model stays mostly on topic but may drift at longer lengths |

⚠️ Always use `repetition_penalty ≥ 1.1`. Without it, this model will fall into repetitive loops. With it on, output quality is mostly stable.

⚠️ Keep `temperature` and `top_p` at or above 0.6. Going below this threshold causes the model to lean heavily on PDF-derived tokens that render as `<22>` (the Unicode replacement character). See PDF Tokens for details.
⚠️ Important Limitations
- Not Instruction-Tuned: This is a base model. It will often ignore prompts, continue in unexpected directions, or respond off-topic.
- No Safety Tuning: No RLHF, no constitutional AI, no content filtering.
- Limited World Knowledge: ~12M parameters cannot store meaningful world knowledge.
- Context Window: Hard limit of 1,024 tokens.
- English Only: The TokenMonster vocabulary is English-specific.
- TokenMonster Required: Uses a TokenMonster adapter. Make sure `tokenmonster` is installed (`pip install tokenmonster`). You do not need a custom model loader — standard `AutoModelForCausalLM` works — but the tokenmonster package is still required for the tokenizer.
- PDF Tokens (`<22>`): The model was trained on data containing PDF-extracted text, and will sometimes generate tokens that render as `<22>` (the Unicode replacement character). See the PDF Tokens section below.
- Repetition Without Penalty: Without `repetition_penalty`, the model will fall into loops. Always use `repetition_penalty ≥ 1.1`.
- Shared Tensor Warning: When saving or loading, you may see `Removed shared tensor {'lm_head.weight'} while saving.` This is expected from tied word embeddings and is safe to ignore.
📋 Honest Notices
These are candid, first-hand observations about how this model actually behaves.
Significantly more coherent than its predecessors. Output quality is a clear step up from Stentor2-12M-Preview and Stentor-12M v1. It is competitive with — and in some runs slightly better than — Stentor-30M, despite being less than half the parameter count.
Generates PDF tokens (`<22>`). The model will sometimes output tokens that display as `<22>` — the Unicode replacement character. These are valid tokens from PDF-extracted training data that most systems cannot natively render, and that most language models never produce. They are not a decoding error. See PDF Tokens for details.

Prone to repetition without `repetition_penalty`. Left unchecked, the model loops. With `repetition_penalty` set to 1.1–1.4, this is mostly resolved and output quality is stable.

No custom model loader required. 🎉 Unlike Stentor2-12M-Preview, this model loads with standard `AutoModelForCausalLM.from_pretrained()`. No special loading code needed.

You still need to install `tokenmonster`. The tokenizer wraps TokenMonster and requires `pip install tokenmonster`. It's a one-line install — nothing like the complexity of a fully custom tokenizer — but it is a required dependency.

The model talks about education and academics — a lot. Trained on FineWeb-HQ (a high-quality filtered web corpus with significant PDF and educational content) and StenCore (100% PDFs), the model has a strong prior toward academic language, school systems, curriculum, research, and formal writing. Prompts unrelated to education will frequently be redirected toward educational framing anyway.

This model will usually stop on its own. Unlike other Stentor models, which tend to run until they hit `max_new_tokens`, Stentor2-12M will typically emit an EOS token and halt by itself — sometimes after 20 tokens, sometimes after 500; the stopping point varies widely. You don't need a tight token cap to prevent runaway generation. That said, it's still recommended to set a generous ceiling (e.g. `max_new_tokens=1000`) rather than leaving it uncapped, as a safety net for runs where the model doesn't stop.
PDF Tokens & The Replacement Character
Stentor2-12M was trained on FineWeb-HQ and StenCore, which includes a substantial amount of text extracted from PDF documents. PDFs often contain binary sequences or encoding artifacts that survive text extraction as raw bytes outside the standard UTF-8 range. These get tokenized and trained on as valid tokens.
As a result, the model has learned to generate these tokens — and will do so, especially at lower temperatures or lower top-p values. When decoded, they render as <22> (U+FFFD, the Unicode replacement character), because most systems substitute this symbol for unrepresentable byte sequences.
What this looks like:
Academics → <22><22><22><22><22><22><22><22><22><22><22><22><22><22>...
How to reduce it: Keep temperature ≥ 0.6 and top_p ≥ 0.6. Below these thresholds, PDF token probability rises sharply. At well-tuned settings (temp 0.8, top_p 0.7), <22> tokens appear occasionally but do not dominate output.
This is not a bug. Most language models are trained on clean text and never encounter these sequences. Stentor2-12M is unusual in that it can generate them — a side effect of training on real-world PDF-extracted data.
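If the `<22>` artifacts are unwanted in downstream text, they are easy to strip after decoding. This is a post-processing suggestion of mine, not part of the model's tooling:

```python
def strip_pdf_tokens(text: str) -> str:
    # U+FFFD is the replacement character that PDF-derived tokens decode to;
    # drop it and collapse the leftover whitespace.
    return " ".join(text.replace("\ufffd", " ").split())

print(strip_pdf_tokens("Academics \ufffd\ufffd are important"))  # Academics are important
```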
Model Architecture — Full Specification
Stentor2-12M is a LlamaForCausalLM model.
Core Configuration
| Component | Value | Derivation |
|---|---|---|
| Architecture | LlamaForCausalLM | Hard-coded in training script |
| Hidden Size | 256 | embedding_params (2,064,384) ÷ vocab_size (8,064) = 256 ✓ |
| Intermediate Size (FFN) | 768 | Hidden × 3 (verified via total param count) |
| Num Hidden Layers | 12 | Verified via total param count formula |
| Num Attention Heads | 4 | Hidden ÷ head_dim = 256 ÷ 64 = 4 |
| Num Key/Value Heads | 4 | Full MHA (no GQA at this scale) |
| Head Dimension | 64 | Enforced by training script |
| Vocab Size | 8,064 | TokenMonster 8K base + 62 padding tokens (multiple of 128) |
| Max Position Embeddings | 1,024 | block_size default in training script |
| Hidden Activation | SiLU | LlamaForCausalLM default |
| Positional Encoding | RoPE | rope_theta = 10,000.0 |
| RMS Norm Epsilon | 1e-5 | Default in training script |
| Tie Word Embeddings | True | Shared embedding / LM head weights |
| Attention Implementation | SDPA | PyTorch Scaled Dot Product Attention |
Parameter Count Breakdown
def estimate_llama_params(vocab_size, hidden_size, intermediate_size,
                          num_hidden_layers, num_attention_heads, num_key_value_heads):
    # Attention: Q and O projections (hidden x hidden) + K and V projections (hidden x kv_dim)
    kv_dim = int(hidden_size * num_key_value_heads / num_attention_heads)
    attn = 2 * hidden_size * hidden_size + 2 * hidden_size * kv_dim
    # SwiGLU MLP: gate, up, and down projections
    mlp = 3 * hidden_size * intermediate_size
    # Input + post-attention RMSNorm scales per layer
    norm = 2 * hidden_size
    # Tied embedding table + per-layer blocks + final norm
    total = vocab_size * hidden_size + num_hidden_layers * (attn + mlp + norm) + hidden_size
    return total
Plugging in Stentor2 values:
kv_dim = 256 * 4 / 4 = 256
attn = 2×256×256 + 2×256×256 = 262,144
mlp = 3×256×768 = 589,824
norm = 2×256 = 512
per_layer = 852,480
embedding = 8,064 × 256 = 2,064,384
layers = 12 × 852,480 = 10,229,760
final_norm = 256
total = 2,064,384 + 10,229,760 + 256 = 12,294,400 ✓
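The same arithmetic can be re-checked inline in a few lines (constants taken from the configuration table above; since this model uses full MHA, kv_dim equals the hidden size):

```python
# Recompute Stentor2-12M's parameter count from its config.
vocab, hidden, inter, layers = 8_064, 256, 768, 12
per_layer = (4 * hidden * hidden) + (3 * hidden * inter) + (2 * hidden)  # attn + MLP + norms
total = vocab * hidden + layers * per_layer + hidden  # tied embedding + stack + final norm
print(f"{total:,}")  # 12,294,400
```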
| Component | Parameters | % of Total |
|---|---|---|
| Embedding Table (tied with LM Head) | 2,064,384 | 16.8% |
| Transformer Layers × 12 | 10,229,760 | 83.2% |
| — Attention (per layer × 12) | 3,145,728 | 25.6% |
| — FFN/MLP (per layer × 12) | 7,077,888 | 57.5% |
| — Layer Norms (per layer × 12) | 6,144 | 0.05% |
| Final RMS Norm | 256 | 0.002% |
| Total | 12,294,400 | 100% |
The Tokenizer: TokenMonster
Stentor2 uses a custom tokenizer adapter wrapping the TokenMonster english-8000-strict-nocapcode-v1 vocabulary.
What Is TokenMonster?
TokenMonster (alasdairforsythe/tokenmonster) is an alternative tokenization approach optimized for compact English vocabulary sizes.
Tokenizer Efficiency vs. v1
This tokenizer produces more tokens per word than the 32K Mistral BPE used by Stentor v1. This is expected — a smaller vocabulary must split words into more pieces on average. The ~70.1% perplexity improvement shows the tradeoff was worth it.
Vocabulary Construction
- Base vocabulary loaded from `alasdairforsythe/tokenmonster` → `vocabs/english-8000-strict-nocapcode-v1.vocab`
- Special tokens added: `</s>` (EOS), `<s>` (BOS), `<pad>` (set equal to EOS)
- A default chat template injected for structural compatibility
- Vocabulary padded to the nearest multiple of 128 → 8,064 tokens
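The padding step is a standard round-up-to-multiple calculation. A sketch (the function name is mine), assuming the base count is the 8,000-token vocabulary plus the `<s>` and `</s>` specials (`<pad>` reuses the EOS id):

```python
def pad_vocab(base_size: int, multiple: int = 128) -> int:
    # Round up to the nearest multiple for hardware-friendly matmul shapes.
    return ((base_size + multiple - 1) // multiple) * multiple

print(pad_vocab(8_002))  # 8064
```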
Tokenizer Configuration
{
"tokenizer_type": "tokenmonster",
"vocab_file": "tokenmonster.vocab",
"model_max_length": 1024,
"eos_token": "</s>",
"bos_token": "<s>",
"pad_token": "</s>",
"vocab_size": 8064
}
Chat Template
{% for message in messages %}
<|{{ message['role'] }}|>
{{ message['content'] }}
{% endfor %}
{% if add_generation_prompt %}<|assistant|>
{% endif %}
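Rendered, the template produces one `<|role|>` header line per message. A plain-Python equivalent for illustration (the actual tokenizer renders the Jinja template; exact whitespace may differ):

```python
def render_chat(messages, add_generation_prompt=False):
    # Mirror the Jinja chat template: a <|role|> line, then the content.
    out = "".join(f"<|{m['role']}|>\n{m['content']}\n" for m in messages)
    if add_generation_prompt:
        out += "<|assistant|>\n"
    return out

print(render_chat([{"role": "user", "content": "Hi"}], add_generation_prompt=True))
```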
Training Infrastructure
Hardware
| Component | Specification |
|---|---|
| GPU Count | 2× NVIDIA Tesla T4 |
| VRAM per GPU | 15.64 GB |
| Total VRAM | ~31.3 GB |
| Active Training Processes | 2 (dual-process via HuggingFace Accelerate) |
| Platform | Kaggle Notebooks (free tier) |
| Accelerator Library | HuggingFace Accelerate |
Software Stack
| Package | Role |
|---|---|
| PyTorch | Core tensor operations and autograd |
| HuggingFace Transformers | Model architecture (LlamaForCausalLM) |
| HuggingFace Accelerate | Training loop and device management |
| HuggingFace Datasets | Data loading |
| bitsandbytes | Dual-GPU optimization |
| tokenmonster | Custom vocabulary |
| safetensors | Model serialization |
Training Hyperparameters — Complete Reference
Core Training Parameters
| Hyperparameter | Value | Notes |
|---|---|---|
| `learning_rate` | 8e-4 | AdamW LR (script default) |
| `weight_decay` | 0.01 | Applied to non-embedding, non-norm, non-bias params |
| `max_grad_norm` | 1.0 | Gradient clipping threshold |
| `optimizer` | AdamW | With betas=(0.9, 0.95), eps=1e-8 |
| `scheduler` | Cosine | Cosine decay with linear warmup |
| `warmup_ratio` | 0.05 | → 477 warmup steps |
| `stable_ratio` | 0.80 | → 7,641 stable steps |
| `source_token_budget` | 240,000,000 | Source data token cap |
| `token_budget` | 480,000,000 | Total training tokens; hit at ~3,662 steps |
| `max_train_steps` | 9,552 | Configured limit; token budget hit first |
| `seed` | 42 | Reproducibility seed |
| `mixed_precision` | fp16 | All activations/gradients in FP16 |
Batch & Sequence Parameters
| Hyperparameter | Value | Notes |
|---|---|---|
| `per_device_train_batch_size` | 32 | Per GPU per gradient accumulation step |
| `per_device_eval_batch_size` | 16 | Evaluation batch size |
| `gradient_accumulation_steps` | 2 | Effective optimizer steps every 2 forward passes |
| `total_batch_size` | 128 | per_device × processes × grad_accum = 32 × 2 × 2 |
| `block_size` | 1,024 | Sequence length; training packed to this size |
| `tokens_per_optimizer_step` | 131,072 | total_batch_size × block_size = 128 × 1024 |
| `num_train_epochs` | 2 | Both epochs completed before token budget |
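The derived rows in this table follow directly from the per-device settings; a quick check (480M is the configured `token_budget`):

```python
per_device_batch, processes, grad_accum, block_size = 32, 2, 2, 1024

total_batch = per_device_batch * processes * grad_accum   # 128 sequences per optimizer step
tokens_per_step = total_batch * block_size                # 131,072 tokens per optimizer step
steps_to_budget = 480_000_000 // tokens_per_step          # ~3,662 optimizer steps
print(total_batch, tokens_per_step, steps_to_budget)
```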
Evaluation & Checkpointing
| Hyperparameter | Value |
|---|---|
| `eval_steps` | 900 |
| `best_eval_steps` | 900 |
| `best_eval_start_step` | 1,500 |
| `save_every_minutes` | 30 |
| `save_total_limit` | 2 |
| `logging_steps` | 300 |
| `max_eval_samples` | 5,000 |
AdamW Optimizer — Detailed
- Decay group: All `nn.Linear` weight matrices → `weight_decay = 0.01`
- No-decay group: Bias terms, normalization parameters, embedding parameters → `weight_decay = 0.0`
- Betas: `(0.9, 0.95)`
- Epsilon: `1e-8`
- Fused kernel: Enabled when CUDA is available
Learning Rate Schedule
Phase 1 — Warmup (steps 0–477):
LR ramps linearly from 0 → 8e-4
Phase 2 — Cosine Decay (steps 477–9,552):
LR follows cosine curve from 8e-4 → 0
(Training ended early at ~3,662 steps due to token budget)
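Ignoring the logged stable phase for simplicity, the warmup-plus-cosine shape described above can be sketched as a pure function of the step index (a simplified reconstruction of mine, not the training script's scheduler):

```python
import math

PEAK_LR, WARMUP_STEPS, TOTAL_STEPS = 8e-4, 477, 9_552

def lr_at(step: int) -> float:
    # Linear ramp to the peak, then cosine decay toward zero.
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * PEAK_LR * (1 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(477))  # 0.0 0.0008; lr_at(9_552) decays to ~0
```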
The T4 Mixed-Precision Recipe — Deep Dive
The training pipeline uses a custom T4 Mixed-Precision Recipe designed for stable fp16 training on NVIDIA Tesla T4 GPUs.
1. FP32 Normalization Layers (25 modules)
All RMSNorm modules are monkey-patched to run in FP32 regardless of input dtype:
def make_fp32_norm_forward(original_forward):
    # Wrap an RMSNorm forward so it always computes in FP32,
    # then cast the result back to the caller's dtype.
    def _fp32_norm_forward(hidden_states, *args, **kwargs):
        input_dtype = hidden_states.dtype
        output = original_forward(hidden_states.float().contiguous(), *args, **kwargs)
        return output.clone().to(input_dtype)
    return _fp32_norm_forward
The .clone() call prevents returning graph-managed buffers that can be overwritten. The .contiguous() prevents strided-tensor issues in FP32 norm ops.
Count: 12 layers × 2 norms each (input + post-attention) + 1 final norm = 25 modules total.
2. FP32 Critical Layers (2 layers)
The first 2 transformer layers are designated as critical and run entirely in FP32:
- Weights cast to `.float()` at setup time
- `forward()` monkey-patched to cast all inputs to FP32 and outputs back to the original dtype
- `torch.amp.autocast("cuda", enabled=False)` prevents re-downcasting inside the layer
Rationale: The first layers handle embedding projection and initial feature extraction. Instability here corrupts the entire forward pass. Running these in FP32 provides a stability floor at minimal compute cost.
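The pattern described above (FP32 weights, cast-in/cast-out, autocast disabled) can be sketched for a single layer in plain PyTorch. This is my illustration of the recipe, not the training script's code:

```python
import torch
import torch.nn as nn

def make_critical_fp32(module: nn.Module) -> nn.Module:
    # Promote the layer's weights to FP32 and wrap forward() so inputs are
    # cast up, autocast cannot re-downcast inside, and outputs return to the
    # caller's dtype -- a sketch of the "critical layer" treatment.
    module.float()
    original_forward = module.forward

    def forward(x: torch.Tensor, *args, **kwargs):
        with torch.amp.autocast("cuda", enabled=False):
            out = original_forward(x.float(), *args, **kwargs)
        return out.to(x.dtype)

    module.forward = forward
    return module

layer = make_critical_fp32(nn.Linear(8, 8).half())
y = layer(torch.randn(2, 8, dtype=torch.float16))
print(layer.weight.dtype, y.dtype)  # torch.float32 torch.float16
```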
3. FP32 Attention Softmax — Skipped
The FP32 softmax wrapper was not applied (0 modules). As logged during training:
Stability tricks: skipping FP32 softmax wrapper because SDPA/FlashAttention kernels require fp16/bf16 inputs for fast paths.
PyTorch's SDPA implementation handles numerical stability internally and requires fp16/bf16 inputs for its optimized code paths. Wrapping softmax in FP32 would bypass these fast kernels.
T4 Recipe Summary
| Technique | Count | Scope |
|---|---|---|
| FP32 norm modules | 25 | All RMSNorm layers |
| FP32 critical layers | 2 | First 2 transformer layers |
| FP32 softmax modules | 0 | Skipped — SDPA incompatible |
Data Pipeline
Dataset
The model was trained on FineWeb-HQ (`epfml/FineWeb-HQ`) and StenCore (`StentorLabs/StenCore`) — high-quality filtered web and PDF corpora.

- Total tokens processed: 480,116,736 (~480M, budget-limited run)
- Source tokens (raw data): 240,000,000 (240M source token budget)
Text Preprocessing
import unicodedata

def clean_text(text: str) -> str:
    # Canonicalize Unicode, drop empty lines, and collapse all whitespace.
    text = unicodedata.normalize("NFKC", text)
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    text = " ".join(lines)
    text = " ".join(text.split())
    return text
- NFKC normalization maps visually equivalent Unicode characters to canonical form
- Whitespace collapse ensures consistent tokenization
Sequence Packing
After tokenization, samples are packed into fixed 1,024-token blocks. Labels for packed sequences are identical to input_ids (causal LM). No special boundary masking between packed samples.
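The packing step can be sketched in a few lines. Two assumptions of mine: samples are simply concatenated, and the tail that does not fill a full block is dropped (the card does not specify tail handling):

```python
def pack_blocks(token_streams, block_size=1024):
    # Concatenate all tokenized samples, then slice into fixed-size blocks.
    flat = [tok for stream in token_streams for tok in stream]
    usable = (len(flat) // block_size) * block_size
    return [flat[i:i + block_size] for i in range(0, usable, block_size)]

print(pack_blocks([[1, 2, 3], [4, 5, 6, 7]], block_size=4))  # [[1, 2, 3, 4]]
```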
Weight Initialization
import torch.nn as nn

def initialize_weights(model, std=0.02):
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Embedding)):
            module.weight.data.normal_(mean=0.0, std=std)
            # nn.Embedding has no bias attribute, so guard with getattr
            if getattr(module, "bias", None) is not None:
                module.bias.data.zero_()
        elif "rmsnorm" in type(module).__name__.lower():
            if module.weight is not None:
                module.weight.data.fill_(1.0)
            if getattr(module, "bias", None) is not None:
                module.bias.data.zero_()
- Linear/Embedding layers: normal(0, 0.02)
- RMSNorm scale weights: 1.0 (identity at start)
- Biases: zero
Evaluation & Results
Training Curves
Raw data available in `training_curves.csv` — columns: `step`, `epoch`, `train_loss`, `eval_loss`, `eval_ppl`, `note`.
Results Summary
| Checkpoint | Step | Eval Loss | Perplexity |
|---|---|---|---|
| Early | 900 | — | — |
| First best | 1,500 | 3.5410 | 34.50 |
| Second best | 2,400 | 3.3743 | 29.20 |
| Epoch 0 end | ~2,350 | 3.3776 | 29.30 |
| Best Checkpoint | 3,300 | 3.2814 | 26.61 |
| Epoch 1 end (final) | ~3,662 | 3.2565 | 25.96 |
Comparison to Stentor v1
| Model | Best Eval Loss | Best Perplexity | Improvement |
|---|---|---|---|
| Stentor-12M (v1) | 4.4887 | 89.01 | — |
| Stentor2-12M | 3.2814 | 26.61 | ↓70.1% perplexity |
Note: The comparison is close but not perfectly controlled — v1 trained on a mix of FineWeb-Edu and Cosmopedia v2, while Stentor2 trained on FineWeb-HQ and StenCore. Both use educational-quality text at the same parameter count.
BLiMP Evaluation Results
BLiMP (Benchmark of Linguistic Minimal Pairs) measures grammatical sensitivity by presenting 67 targeted minimal-pair contrasts — one grammatical sentence vs. one ungrammatical sentence — across a broad range of English syntactic phenomena. A score of 50% is chance; 100% is perfect.
| Metric | Value |
|---|---|
| Overall BLiMP Accuracy | 68.95% |
| ✅ Strong (≥ 85%) | 20 tasks |
| 🟡 Moderate (56–84%) | 29 tasks |
| ❌ Weak (≤ 55%) | 18 tasks |
✅ Strong Performance (≥ 85%)
Click to expand — 20 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| `principle_A_case_1` | 100.00% | Reflexive pronouns must be locally bound by their antecedent (accusative case variant) |
| `irregular_past_participle_adjectives` | 99.70% | Correct irregular past participle form used as an adjective (e.g., broken, not breaked) |
| `sentential_negation_npi_licensor_present` | 99.20% | Negative polarity items (any, ever) are licensed by sentential negation (not, never) |
| `wh_vs_that_no_gap_long_distance` | 97.50% | That (not a wh-word) is the correct complementizer in long-distance clauses without an extraction gap |
| `determiner_noun_agreement_1` | 97.40% | Determiner (a/an/the) must match noun in number — basic cases |
| `existential_there_quantifiers_1` | 95.80% | Existential there requires an indefinite quantified NP subject (There are some cats, not There are the cats) |
| `anaphor_number_agreement` | 95.70% | Reflexives and reciprocals must match their antecedent in grammatical number |
| `determiner_noun_agreement_2` | 92.50% | Determiner-noun number agreement — additional test set |
| `wh_questions_subject_gap_long_distance` | 92.40% | Wh-movement leaving a subject gap across a clause boundary (Who did you say ___ left?) |
| `wh_vs_that_no_gap` | 92.10% | That (not wh-) complementizer is correct when no extraction gap is present in the clause |
| `determiner_noun_agreement_irregular_2` | 90.00% | Determiner-noun agreement with morphologically irregular nouns (e.g., mouse/mice) — set 2 |
| `determiner_noun_agreement_with_adjective_1` | 89.60% | Determiner agreement with noun when an adjective intervenes (a tall man, not an tall man) |
| `principle_A_domain_1` | 88.90% | Reflexives must be bound within their minimal local binding domain — set 1 |
| `wh_questions_subject_gap` | 87.60% | Wh-movement with a gap in subject position (Who ___ left early?) |
| `irregular_plural_subject_verb_agreement_2` | 87.40% | Subject-verb agreement when subject is an irregular plural (The mice run, not The mice runs) — set 2 |
| `regular_plural_subject_verb_agreement_1` | 87.30% | Subject-verb agreement with regular plural subjects (The dogs bark, not The dogs barks) |
| `passive_1` | 86.10% | Passive constructions require the correct auxiliary (be) and past participle (was eaten, not was eat) |
| `principle_A_case_2` | 85.90% | Reflexive binding locality — nominative case variant |
| `passive_2` | 85.80% | Passive construction well-formedness — additional test set |
| `determiner_noun_agreement_with_adj_2` | 85.40% | Determiner-noun agreement across an intervening adjective — set 2 |
🟡 Moderate Performance (56–84%)
Click to expand — 29 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| `tough_vs_raising_2` | 83.10% | Distinguishing tough constructions (John is easy to please) from subject raising (John seems to leave) — set 2 |
| `animate_subject_trans` | 82.80% | Transitive verbs that semantically require an animate subject (The girl frightened him, not The rock frightened him) |
| `determiner_noun_agreement_with_adj_irregular_2` | 81.40% | Determiner-noun agreement across an adjective with irregular noun morphology — set 2 |
| `ellipsis_n_bar_2` | 79.70% | N-bar ellipsis using one as a pro-form (the red one for the red car) — set 2 |
| `existential_there_object_raising` | 79.70% | Existential there raising with an object (There seems to be a problem) |
| `regular_plural_subject_verb_agreement_2` | 79.70% | Subject-verb agreement with regular plural subjects — additional test set |
| `determiner_noun_agreement_with_adj_irregular_1` | 78.60% | Determiner-noun agreement across an adjective with irregular noun morphology — set 1 |
| `irregular_past_participle_verbs` | 78.50% | Correct irregular past participle in verbal use (has eaten, not has eated) |
| `irregular_plural_subject_verb_agreement_1` | 77.40% | Subject-verb agreement with irregular plural subjects (The children run) — set 1 |
| `transitive` | 76.80% | Transitive verbs require an overt direct object (She ate the cake, not She ate) |
| `superlative_quantifiers_2` | 75.60% | Superlative quantifiers (at most N, at least N) must appear in correct syntactic environments — set 2 |
| `determiner_noun_agreement_irregular_1` | 75.50% | Determiner-noun agreement with morphologically irregular nouns — set 1 |
| `coordinate_structure_constraint_object_extraction` | 75.00% | Extracting an object out of a coordinate structure is blocked (*What did she buy ___ and a hat?) |
| `expletive_it_object_raising` | 73.30% | Object raising with expletive it (It seems that she left, well-formed vs. ill-formed variants) |
| `existential_there_subject_raising` | 72.80% | Raising of the subject into an existential there construction (There seems to be a man) |
| `adjunct_island` | 72.00% | Wh-extraction out of an adjunct clause is blocked (*Who did she leave because she saw ___?) |
| `distractor_agreement_relational_noun` | 70.90% | Subject-verb agreement when a relational noun (e.g., the mother of the boys) intervenes as a distractor |
| `drop_argument` | 70.20% | Verbs that require their object cannot drop it (She devoured the cake, not *She devoured) |
| `animate_subject_passive` | 68.70% | Passive constructions with animate subjects in semantically restricted contexts |
| `ellipsis_n_bar_1` | 67.30% | N-bar ellipsis using one as a pro-form — set 1 |
| `anaphor_gender_agreement` | 65.70% | Reflexives and reciprocals must match their antecedent in grammatical gender (herself vs. himself) |
| `principle_A_c_command` | 65.60% | Reflexive pronoun must be c-commanded by its antecedent within its binding domain |
| `npi_present_1` | 65.50% | Negative polarity items (any, ever) must appear within the scope of a licensor — set 1 |
| `principle_A_domain_2` | 64.10% | Reflexives must be locally bound — set 2 (more complex embedding environments) |
| `npi_present_2` | 63.80% | Negative polarity item licensing — additional test set |
| `wh_island` | 63.40% | Extraction from an embedded wh-clause is blocked (*Who do you wonder what ___ bought?) |
| `causative` | 60.80% | Only certain verbs permit the causative alternation (She broke the vase → The vase broke, vs. *She arrived the train) |
| `intransitive` | 60.60% | Intransitive verbs do not take a direct object (She arrived, not *She arrived the bus) |
| `wh_questions_object_gap` | 59.10% | Wh-movement leaving a gap in object position (What did she buy ___?) |
❌ Weak Performance (≤ 55%)
Click to expand — 18 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| `only_npi_licensor_present` | 52.90% | Only can license negative polarity items within its scope (Only John has ever left) |
| `principle_A_domain_3` | 50.50% | Reflexive binding locality — set 3 (complex embedded environments) |
| `inchoative` | 49.50% | Only certain verbs allow the inchoative alternation (The ice melted but *The chef melted) |
| `superlative_quantifiers_1` | 48.00% | Superlative quantifiers (at most, at least) used in correct syntactic environments — set 1 |
| `sentential_negation_npi_scope` | 46.40% | NPI must fall within the semantic scope of sentential negation, not outside it |
| `distractor_agreement_relative_clause` | 42.50% | Subject-verb agreement when a relative clause with a different-number noun intervenes as a distractor |
| `only_npi_scope` | 41.60% | NPI must be within the c-command scope of only, not outside it |
| `complex_NP_island` | 41.00% | Extraction from inside a complex NP (e.g., a relative clause inside an NP) is blocked |
| `coordinate_structure_constraint_complex_left_branch` | 40.20% | The coordinate structure constraint blocks extraction from a complex left branch of a coordinate |
| `principle_A_reconstruction` | 40.20% | Reflexive binding applies at the reconstructed (LF) position, not the surface position |
| `left_branch_island_echo_question` | 40.00% | Extraction from the left branch of an NP (e.g., an adjective) is blocked in echo questions |
| `sentential_subject_island` | 39.30% | Extraction from a sentential subject (*Who is that John left obvious?) is blocked |
| `left_branch_island_simple_question` | 38.80% | Extraction from the left branch of an NP is blocked in simple questions (*How tall is the ___ man?) |
| `tough_vs_raising_1` | 34.00% | Distinguishing tough constructions from subject raising — set 1 |
| `wh_vs_that_with_gap` | 33.80% | A wh-complementizer (not that) is required when an extraction gap is present in the embedded clause |
| `existential_there_quantifiers_2` | 29.60% | Existential there with quantified NP subjects — more complex or marked cases |
| `matrix_question_npi_licensor_present` | 16.30% | NPI licensing in matrix (root) questions (Has she ever left? vs. Has she left ever?) |
| `wh_vs_that_with_gap_long_distance` | 11.00% | Wh-complementizer is required (not that) when a long-distance extraction gap is present |
BLiMP Summary by Linguistic Category
| Domain | Avg. Score | Strengths | Weaknesses |
|---|---|---|---|
| Agreement | ~80% | Det-noun & number agreement (87–97%) | Distractor via relative clause (42%) |
| Anaphora / Binding | ~72% | Principle A case 1 (100%), domain 1 (89%) | Reconstruction (40%), domain 3 (50%) |
| NPI Licensing | ~57% | Sentential negation licensor (99%) | Matrix question (16%), wh-gap long-distance (11%) |
| Island Constraints | ~50% | Object extraction (75%), adjunct island (72%) | Left branch (39–40%), sentential subject (39%) |
| Argument Structure | ~73% | Passive (86%), animate subject trans (83%) | Inchoative (49%), causative (61%) |
| Filler-Gap / Wh | ~70% | Subject gap long-distance (92%), no-gap (92%) | With-gap long-distance (11%), with-gap (34%) |
| Raising & Existential | ~72% | Object raising (80%), subject raising (73%) | Existential quantifiers set 2 (30%) |
| Ellipsis | ~73% | N-bar ellipsis set 2 (80%) | N-bar ellipsis set 1 (67%) |
Interpretation: Stentor2-12M shows solid grammatical intuitions for surface-level agreement, basic anaphora, and simple filler-gap dependencies — phenomena learnable from distributional patterns in text. It struggles most with phenomena requiring abstract syntactic sensitivity: scope-based NPI licensing, island extraction constraints, and complementizer selection under long-distance movement. These results are typical for a 12M parameter base model and broadly consistent with expectations from the scaling literature.
Training Dynamics
The training run processed 480,116,736 tokens across 2 epochs, stopping when the token budget was reached at approximately step 3,662 (well before the configured max_train_steps of 9,552).
Epoch 0 (~2,350 steps, 3,497s): Loss dropped rapidly from ~5.0 to ~3.2. Best checkpoints saved at steps 1,500 (loss 3.5410) and 2,400 (loss 3.3743). Epoch-end eval: loss 3.3776, ppl 29.30.
Epoch 1 (~1,310 additional steps, 1,906s): Continued improvement. New best at step 3,300 (loss 3.2814). Token budget hit shortly after step 3,600. Epoch-end eval: loss 3.2565, ppl 25.96.
Throughput: ~91,600–92,000 tokens/sec average throughout training.
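The reported perplexities follow directly from the evaluation losses via ppl = exp(loss), a quick sanity check anyone can reproduce:

```python
import math

def perplexity(loss):
    # Perplexity is the exponential of the mean per-token
    # cross-entropy loss (in nats).
    return math.exp(loss)

print(round(perplexity(3.3776), 2))  # 29.3  (epoch 0 eval)
print(round(perplexity(3.2565), 2))  # 25.96 (epoch 1 eval)
```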
Use Cases & Intended Uses
| Use Case | Suitability | Notes |
|---|---|---|
| Studying transformer training dynamics | ✅ High | Small enough to train/fine-tune on free compute |
| Tokenization efficiency research | ✅ High | 8K vs 32K vocab tradeoff directly observable |
| Speculative decoding experiments | ✅ High | Fast enough to serve as a draft model |
| Benchmarking CPU/edge inference latency | ✅ High | ~12MB in FP16, runs on any hardware |
| Testing quantization/conversion pipelines | ✅ High | GGUF, ONNX, INT8 pipeline validation |
| Teaching material for LLM courses | ✅ High | Architecture simple enough to trace by hand |
| Text continuation / creative prompting | ✅ High | Works well enough for basic continuation |
| Domain-specific fine-tuning research | ✅ High | Small enough to iterate rapidly |
| Factual Q&A | ❌ Not suitable | No reliable world knowledge |
| Production deployment | ❌ Not suitable | No safety tuning |
| Non-English text | ❌ Not suitable | TokenMonster vocab is English-only |
| Long-document tasks | ❌ Not suitable | 1,024-token context is too small |
Out-of-Scope Uses
- User-facing applications of any kind — No safety filtering, no alignment, no factual reliability.
- Medical, legal, or financial advice — 12M parameters cannot store or reason over specialized knowledge.
- Generating content about real people — Outputs mentioning real people are likely to be fabricated.
- Automated content pipelines — Output quality is insufficient for unreviewed publication.
- Non-English use — Vocabulary is English-only.
- Instruction following — This is a base model.
Ethical Considerations & Societal Impact
Inherited Data Biases
Trained on FineWeb-HQ and StenCore, filtered subsets of Common Crawl web text and PDFs. The model inherits:
- Western-centric perspective — Educational content skews toward Western viewpoints.
- English monolingualism — Training data and vocabulary are English-only.
- Demographic underrepresentation — Groups underrepresented in English educational web content will be underrepresented in outputs.
No Safety Tuning
No RLHF, no DPO, no constitutional AI, no content filtering.
Positive Aspects
- Democratizing AI research — Trained entirely on free-tier Kaggle compute.
- Transparency — Full training hyperparameters, architecture specification, and training-run details are published.
- Minimal environmental footprint — ~2.1 hours of dual-GPU compute.
Inference Guide
Basic Generation
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained("StentorLabs/Stentor2-12M", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("StentorLabs/Stentor2-12M", trust_remote_code=True)
model = model.to("cuda").eval()

def generate(prompt, max_new_tokens=50, temperature=0.9, top_p=0.65):
    input_ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long).to(model.device)
    attention_mask = torch.ones_like(input_ids)
    with torch.inference_mode():
        output = model.generate(
            input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            top_p=top_p,
            repetition_penalty=1.15,
            pad_token_id=tokenizer.pad_token_id,
        )
    new_ids = output[0][input_ids.shape[1]:].tolist()
    return tokenizer.decode(new_ids).strip()

print(generate("The history of computing began"))
```
🚀 Free Inference — Try It Now
No GPU, no setup, no API key required.
StentorLabs hosts a free, unlimited inference demo for all Stentor models at:
🔗 https://huggingface.co/spaces/StentorLabs/StentorLabs-demo_space
Features
| Mode | Description |
|---|---|
| Normal Generation | Standard text completion with adjustable temp, top-p, repetition penalty, and max tokens |
| Token Probability Explorer | Inspect per-token probability distributions as the model generates |
| Temperature Sweep | Generate the same prompt across multiple temperatures simultaneously to compare behavior |
| Head-to-Head Comparison | Run two Stentor models side-by-side on the same prompt |
All parameters (temperature, top-p, repetition penalty, max tokens) are fully customizable. Inference is free and unlimited.
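The Token Probability Explorer surfaces the model's per-token distribution; the same view can be approximated locally from raw logits. A minimal sketch with hand-written logits standing in for a real forward pass (the vocabulary and logit values below are illustrative only):

```python
import math

def top_k_probs(logits, vocab, k=3):
    # Softmax over the logits, then return the k most probable tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(vocab, probs), key=lambda t: -t[1])
    return ranked[:k]

# Illustrative next-token candidates, not real Stentor2 output:
vocab = ["the", "a", "school", "students", "<22>"]
logits = [2.1, 1.3, 0.9, 0.4, -1.0]
for token, p in top_k_probs(logits, vocab):
    print(f"{token}: {p:.3f}")
```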
Real Model Responses
The following are unfiltered outputs from Stentor2-12M. The first three are the very first generations run (no cherry-picking), the fourth is a deliberately bad example, and the fifth is a deliberately good one.
Sample 1 — temp=0.8, top_p=0.7, rep_penalty=1.2, max_tokens=140
Prompt: 5 Tips and Tricks for Difficult Homework.
SPECIAL STUDY SUBJECT: The Earth's Fall Times <22><22><22> Review the article with reference to
"A Tyre" <22><22><22> Revise your answers carefully and read it carefully. Write down what you
think you did and what you think it had to do with your answer. <22><22><22> Review your notes.
Read a book from the book, and review your answers. What would you do? <22><22><22> Do some
research findings on your own reading. <22><22><22> Invite students to read them. <22><22><22>
Try to learn the text by using different sources. <22>
The model completely ignores the prompt and defaults to academic/educational framing. PDF tokens (`<22>`) appear throughout as list separators — a characteristic artifact of the training data.
Sample 2 — temp=0.8, top_p=0.7, rep_penalty=1.3, max_tokens=140
Prompt: How to Properly Cite Sources
The Yellowstone Elementary School (CFP) is committed to providing an integrated and well-rounded
education system for students of all ages. The curriculum is designed to promote academic
achievement, instructional opportunities and a wider range of subject areas of study and practice
as well as teaching the student through its own learning style. This program is an important part
of the school's success in school and homework are set out in the district. We believe that all
pupils who receive a free school year will be able to attend classroom lessons from grade level
levels and will be required to complete their classes at any time during their summer break. We
are pleased that our current PSHCE
Coherent, grammatically sound institutional prose — but completely unrelated to citing sources. Classic Stentor2 behavior: latches onto educational register and produces confident, fluent non-sequitur.
Sample 3 — temp=0.6, top_p=0.75, rep_penalty=1.3, max_tokens=120
Prompt: How to Lose Weight Fast
The U.S. Department of Agriculture (USDA) is committed to providing the necessary services to
the economy and professions of urban development, as well as support for the protection of the
environment and the preservation of all living things within the community. The U.S. Government
provides a framework for developing this information in partnership with local organizations that
provide education, health care, and employment opportunities for all people affected by the
COVID-19 pandemic. We aim to ensure that the government's commitment to sustainable housing
through increased infrastructure and public transportation systems; therefore, we seek to
No connection to weight loss. Government/policy language — another common register for this model. Fluent, well-structured, completely off-topic.
Sample 4 (Hand-Picked Bad) — temp=0.8, top_p=0.5, rep_penalty=1.0, max_tokens=40
Prompt: Academics
<22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22><22>
<22><22><22><22><22><22><22><22><22><22><22><22><22><22>
Pure PDF token collapse. This is what happens with `rep_penalty=1.0` (off), `top_p=0.5` (too low), and a one-word prompt. All three conditions together push the model into its PDF-token attractor state. Do not use these settings.
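The `top_p=0.5` failure mode is easy to see in isolation: nucleus sampling keeps only the smallest set of tokens whose cumulative probability reaches `top_p`, so a low threshold can shrink the candidate pool to a single dominant token. A pure-Python sketch of the filtering step (the probabilities below are made up for illustration, not real model output):

```python
def nucleus_filter(probs, top_p):
    # Keep the smallest prefix of tokens (by descending probability)
    # whose cumulative mass reaches top_p, then renormalize.
    ranked = sorted(probs.items(), key=lambda t: -t[1])
    kept, cum = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Illustrative distribution where one token dominates:
probs = {"<22>": 0.55, "the": 0.20, "school": 0.15, "a": 0.10}
print(nucleus_filter(probs, top_p=0.5))   # only "<22>" survives
print(nucleus_filter(probs, top_p=0.95))  # all four tokens survive
```

When the PDF token already has the largest single probability, `top_p=0.5` filters out every alternative, and with `rep_penalty=1.0` nothing discourages sampling it again — hence the collapse above.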
Sample 5 (Hand-Picked Good) — temp=1.0, top_p=0.7, rep_penalty=1.4, max_tokens=45
Prompt: How to do Homework Assignments
Reasoning and Decision Making This Project is a great way to help students to succeed in school.
The project helps the student achieve better, the more they need to grow their faiths and to
support them in success as they become
Clean, readable, no PDF tokens, no repetition. Not exactly on-topic (pivot to "project" framing is typical), but this is what good Stentor2 output looks like: coherent educational prose, correct grammar, sensible continuation.
Quantization
FP16
```python
model = AutoModelForCausalLM.from_pretrained("StentorLabs/Stentor2-12M", torch_dtype=torch.float16)
model = model.to("cuda")
```
Dynamic INT8 (CPU)
```python
import torch

model_int8 = torch.quantization.quantize_dynamic(
    model.to("cpu"),
    {torch.nn.Linear},
    dtype=torch.qint8,
)
```
Format Conversion
Convert to GGUF (for llama.cpp)
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && pip install -r requirements.txt
huggingface-cli download StentorLabs/Stentor2-12M --local-dir stentor2-12m
python convert_hf_to_gguf.py stentor2-12m/ \
  --outfile stentor2-12m.gguf \
  --outtype f16
./llama-quantize stentor2-12m.gguf stentor2-12m-q4_0.gguf q4_0
./llama-cli -m stentor2-12m-q4_0.gguf -p "The science of" -n 50
```
Convert to ONNX
```bash
pip install optimum[exporters]
optimum-cli export onnx \
  --model StentorLabs/Stentor2-12M \
  --task text-generation-with-past \
  stentor2-12m-onnx/
```
Speculative Decoding
Stentor2-12M can serve as a fast draft model to accelerate inference from larger Llama-family target models.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

draft_model = AutoModelForCausalLM.from_pretrained(
    "StentorLabs/Stentor2-12M", torch_dtype=torch.float16
).to("cuda")
draft_tokenizer = AutoTokenizer.from_pretrained(
    "StentorLabs/Stentor2-12M", trust_remote_code=True
)

target_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    torch_dtype=torch.float16,
    device_map="auto",
)
target_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

inputs = target_tokenizer("Explain the concept of recursion", return_tensors="pt").to(target_model.device)

# Because draft and target use different vocabularies, pass both tokenizers
# so transformers uses universal assisted decoding (requires a recent
# transformers release that supports the assistant_tokenizer argument).
outputs = target_model.generate(
    **inputs,
    assistant_model=draft_model,
    tokenizer=target_tokenizer,
    assistant_tokenizer=draft_tokenizer,
    do_sample=True,
    max_new_tokens=100,
)
print(target_tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Important caveat: Stentor2 uses a different vocabulary (8,064-token TokenMonster) than standard Llama models (32,000-token BPE). This vocabulary mismatch means the target model's acceptance rate may be lower than with a vocabulary-compatible draft model.
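The acceptance mechanics behind that caveat are simple to simulate. Under the scheme of Leviathan et al., each drafted token is accepted with probability min(1, p_target/p_draft), so the more the two distributions disagree, the lower the acceptance rate and the smaller the speedup. A self-contained Monte Carlo sketch with made-up distributions (no real models involved):

```python
import random

def accept_rate(p_draft, p_target, n_trials=100_000, seed=0):
    # Simulate the speculative-decoding accept step: sample a token
    # from the draft distribution, accept it with prob min(1, p_t/p_d).
    rng = random.Random(seed)
    tokens = list(p_draft)
    weights = [p_draft[t] for t in tokens]
    accepted = 0
    for _ in range(n_trials):
        tok = rng.choices(tokens, weights=weights)[0]
        ratio = min(1.0, p_target.get(tok, 0.0) / p_draft[tok])
        if rng.random() < ratio:
            accepted += 1
    return accepted / n_trials

aligned = {"a": 0.5, "b": 0.3, "c": 0.2}
shifted = {"a": 0.2, "b": 0.3, "c": 0.5}
print(accept_rate(aligned, aligned))  # identical draft: 1.0
print(accept_rate(aligned, shifted))  # mismatched draft: ~0.7
```

The expected acceptance rate equals the sum over tokens of min(p_draft, p_target) — here 0.2 + 0.3 + 0.2 = 0.7 for the mismatched pair — which is why a vocabulary-aligned draft model generally yields better speedups.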
Bias, Risks & Limitations
- Prompt Relevance: Outputs frequently off-topic for complex prompts.
- Factual Accuracy: All factual claims should be treated as unreliable.
- Context Boundary: Hard limit of 1,024 tokens.
- English Bias: TokenMonster English vocabulary; other languages tokenize poorly.
- Training Data Bias: Inherits biases in English-language educational web and PDF text.
- Hallucination: May produce confident but fabricated content.
- No Alignment: No RLHF, no DPO, no constitutional training.
- Tokenizer Efficiency: 8K TokenMonster vocabulary produces more tokens per word than standard 32K BPE. This is expected and not a bug.
- Shared Tensor Warning: `Removed shared tensor {'lm_head.weight'}` is expected behavior from tied word embeddings. Safe to ignore.
Related Work
Comparable Sub-50M Models
| Model | Parameters | Best Perplexity | BLiMP Accuracy | Training Data | Notes |
|---|---|---|---|---|---|
| Stentor2-12M (this model) | 12.3M | 26.61 | 68.95% | FineWeb-HQ 480M tokens | Base model, TokenMonster vocab |
| Stentor-12M (v1) | 12.0M | 89.01 | — | FineWeb-Edu + Cosmopedia 200M | Baseline this model improves on |
| Stentor-30M (v1) | 30.4M | 33.02 | — | FineWeb-Edu + Cosmopedia 600M | Larger v1 model |
| TinyStories-33M | ~33M | varies | — | TinyStories (synthetic) | Story generation focused |
| Pythia-14M | 14M | varies (Pile) | — | The Pile 300B tokens | EleutherAI scaling baseline |
Comparison caveats: Perplexity numbers are not directly comparable across models — different validation sets, vocabularies, and tokenizers all affect the number.
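One tokenizer-independent way to make such comparisons fairer is to normalize by text length rather than token count, e.g. bits per character: bpc = (loss_per_token × n_tokens) / (n_chars × ln 2). A sketch with hypothetical counts (the token and character totals below are invented for illustration; they are not Stentor2 measurements):

```python
import math

def bits_per_char(loss_per_token, n_tokens, n_chars):
    # Convert mean per-token nats to total bits, then normalize by
    # characters so tokenizers with different fertility compare fairly.
    total_nats = loss_per_token * n_tokens
    return total_nats / (n_chars * math.log(2))

# Hypothetical: an 8K vocab emits more tokens for the same text than a
# 32K vocab, but its per-token loss may differ; bpc puts both on one scale.
print(round(bits_per_char(3.26, n_tokens=1_300, n_chars=5_000), 3))  # 1.223
print(round(bits_per_char(3.90, n_tokens=1_050, n_chars=5_000), 3))  # 1.182
```

Note how the model with the lower per-token perplexity can still have the higher bits-per-character score once fertility is accounted for, which is exactly the 8K-vs-32K vocabulary tradeoff this model was built to study.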
Related Research Papers
| Paper | Relevance |
|---|---|
| TinyStories — Eldan & Li, 2023 | Demonstrates meaningful generation from 1M–33M parameter models |
| Pythia — Biderman et al., 2023 | Systematic study of small model scaling |
| Scaling Laws — Kaplan et al., 2020 | Informs token budget decisions |
| Chinchilla — Hoffmann et al., 2022 | ~480M tokens for 12M params is compute-optimal under this analysis |
| RoPE — Su et al., 2021 | Positional encoding used in this model |
| Speculative Decoding — Leviathan et al., 2023 | Primary use case for a fast draft model |
| T5 — Raffel et al., 2020 | Source of NFKC text normalization approach |
Environmental Impact
| Factor | Value |
|---|---|
| Hardware | 2× NVIDIA Tesla T4 |
| Active Training Duration | ~2.1 hours |
| Cloud Provider | Kaggle (free tier) |
| Compute Region | Western USA |
| Estimated Carbon | Minimal (< 0.3 kg CO₂e estimated) |
Citation
```bibtex
@misc{izumoto2026stentor2_12m,
  title        = {Stentor2-12M},
  author       = {Kai Izumoto},
  year         = {2026},
  publisher    = {StentorLabs},
  howpublished = {\url{https://huggingface.co/StentorLabs/Stentor2-12M}},
  note         = {12.3M parameter LlamaForCausalLM base model trained on
                  FineWeb-HQ and StenCore with a TokenMonster 8K vocabulary.
                  Trained on 2x Tesla T4 GPUs. Apache 2.0 license.}
}
```
Model Card Contact
Questions, benchmarks, or feedback: StentorLabs@gmail.com or open a discussion.
Made with ❤️ by StentorLabs
Democratizing AI through accessible, efficient models