Stentor2-30M
🔬 Research Artifact — Not a Production Model. This model has no safety tuning and is not suitable for deployment in any user-facing application. See Intended Uses for details.
Table of Contents
- What Is This?
- The Core Design Insight: Vocabulary Efficiency
- Head-to-Head: Stentor v1 vs Stentor2-30M
- Quick Start
- Important Limitations
- Honest Notices
- PDF Tokens & The Replacement Character
- Model Architecture — Full Specification
- The Tokenizer: TokenMonster
- Training Infrastructure
- Training Hyperparameters — Complete Reference
- The T4 Mixed-Precision Recipe — Deep Dive
- Data Pipeline
- Weight Initialization
- Evaluation & Results
- Training Dynamics
- Use Cases & Intended Uses
- Out-of-Scope Uses
- Ethical Considerations & Societal Impact
- Inference Guide
- Free Inference — Try It Now
- Real Model Responses
- Quantization
- Format Conversion
- Speculative Decoding
- Bias, Risks & Limitations
- Related Work
- Environmental Impact
- Citation
What Is This?
Stentor2-30M is the second release in the Stentor2 model family — a ground-up redesign of the original Stentor v1 line. At ~30.4M parameters, it is a compact base language model trained entirely from scratch on free-tier Kaggle compute using two NVIDIA Tesla T4 GPUs.
Like all Stentor models, this is a base next-token predictor, not a chat assistant. It will not reliably follow instructions, has no safety tuning, and is best used for research, prototyping, speculative decoding, and edge-deployment experimentation. The value of this model is not its conversational capability — it's what it represents architecturally: a dramatic efficiency gain over v1 at the same scale, achieved by fixing the root cause of v1's underperformance.
The Core Design Insight: Vocabulary Efficiency
The most consequential change in Stentor2 is the replacement of the standard Llama/Mistral 32,768-token vocabulary with a purpose-built 8,000-token English vocabulary from the TokenMonster project (english-8000-strict-nocapcode-v1, padded to 8,064 for hardware alignment).
This is not a minor tweak — it is the entire architectural story of Stentor2.
Why Vocabulary Size Matters So Much at This Scale
In a transformer language model, the embedding table has shape [vocab_size × hidden_size]. When you tie word embeddings (share the embedding and output projection weights, which Stentor does), this table appears once in the parameter count. At 30M total parameters, the fraction consumed by this table dictates how much "brain" is left over for the actual transformer layers.
Stentor-30M (v1) used a 32,768-token vocabulary. At its hidden size, the embedding table consumed a disproportionate share of the total parameter budget, leaving less capacity for the transformer stack — the part that actually learns language patterns.
Stentor2-30M uses an 8,064-token vocabulary. At a hidden size of 512:
embedding_params = 8,064 × 512 = 4,128,768
total_params = 30,353,920
embedding_share = 13.6%
By shrinking the vocabulary, the embedding table takes up only 13.6% of the model, leaving over 86% of parameters free for the transformer depth and width that directly drive language modeling quality.
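That allocation is easy to verify with a couple of lines of Python (numbers taken from the calculation above):

vocab_size, hidden_size = 8_064, 512
total_params = 30_353_920
embedding_params = vocab_size * hidden_size      # 4,128,768
print(f"{embedding_params / total_params:.1%}")  # → 13.6%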
The result is a ~45.3% reduction in perplexity compared to Stentor-30M v1 (33.02 → 18.07), and a ~32.1% reduction compared to its smaller sibling Stentor2-12M (26.61 → 18.07).
Head-to-Head: Stentor v1 vs Stentor2-30M
| Property | Stentor-30M (v1) | Stentor2-30M |
|---|---|---|
| Vocabulary | 32,768 (Mistral BPE) | 8,064 (TokenMonster English) |
| Hidden Size | 256 | 512 |
| Intermediate Size | 1,024 | 1,024 |
| Num Layers | 21 | 10 |
| Attention Heads | 4 | 8 |
| Head Dimension | 64 | 64 |
| Context Length | 512 tokens | 1,024 tokens |
| Total Parameters | 30,419,712 | 30,353,920 |
| Embedding Share | 27.6% | 13.6% |
| Non-Embedding Params | ~22.0M | ~26.2M |
| Source Token Budget | 600M | 400M |
| Total Token Budget | ~600M | 800M |
| Training Time | ~7.88h | ~6.75h |
| Training Processes | 1 | 2 |
| Best Perplexity | 33.02 | 18.07 |
| Perplexity Reduction | — | ~45.3% |
| Tokenizer | Mistral BPE | TokenMonster |
| Architecture | LlamaForCausalLM | LlamaForCausalLM |
| Training Precision | fp16 | fp16 + FP32 norms/critical layers |
🚀 Quick Start
1. Install Dependencies
pip install transformers torch safetensors huggingface_hub tokenmonster
2. Load the Model
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"StentorLabs/Stentor2-30M",
torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(
"StentorLabs/Stentor2-30M",
trust_remote_code=True,
)
3. Generate Text
input_ids = torch.tensor([tokenizer.encode("The history of computing")], dtype=torch.long).to(next(model.parameters()).device)
attention_mask = torch.ones_like(input_ids)
with torch.inference_mode():
output = model.generate(
input_ids,
attention_mask=attention_mask,
max_new_tokens=80,
do_sample=True,
temperature=1.1,
top_p=0.55,
repetition_penalty=1.15,
pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(output[0].tolist()))
Why attention_mask? The model's pad token and EOS token are the same ID. Without an explicit attention mask, HuggingFace throws a warning because it can't tell which tokens are real vs padding. Passing torch.ones_like(input_ids) tells the model that every token in the input is real.
4. Recommended Generation Settings
| Parameter | Recommended Range | Personal Favorite | Notes |
|---|---|---|---|
| temperature | 0.6 – 0.9 | 0.8 | Below 0.6 causes heavy <22> (PDF token) output; above 0.9 gets chaotic |
| top_p | 0.6 – 0.8 | 0.7 | Below 0.6 also increases <22> tokens significantly |
| repetition_penalty | 1.1 – 1.4 | 1.2 | Without this the model will loop; 1.2–1.3 hits the sweet spot |
| max_new_tokens | 10 – 500 | — | Model stays mostly on topic but may drift at longer lengths |
⚠️ Always use repetition_penalty ≥ 1.1. Without it, this model will fall into repetitive loops.
⚠️ Keep temperature and top_p above 0.6. Going below this threshold causes the model to lean heavily on PDF-derived tokens that render as <22>. See PDF Tokens for details.
⚠️ Important Limitations
- Not Instruction-Tuned: This is a base model. It will often ignore prompts, continue in unexpected directions, or respond off-topic.
- No Safety Tuning: No RLHF, no constitutional AI, no content filtering.
- Limited World Knowledge: ~30M parameters cannot store meaningful world knowledge.
- Context Window: Hard limit of 1,024 tokens.
- English Only: The TokenMonster vocabulary is English-specific.
- TokenMonster Required: Uses a TokenMonster adapter. Make sure tokenmonster is installed (pip install tokenmonster). Standard AutoModelForCausalLM works — but the tokenmonster package is still required for the tokenizer.
- PDF Tokens (<22>): The model was trained on data containing PDF-extracted text, and will sometimes generate tokens that render as <22> (the Unicode replacement character). See the PDF Tokens section below.
- Repetition Without Penalty: Without repetition_penalty, the model will fall into loops. Always use repetition_penalty ≥ 1.1.
- Shared Tensor Warning: When saving or loading, you may see Removed shared tensor {'lm_head.weight'} while saving. This is expected from tied word embeddings and is safe to ignore.
📋 Honest Notices
These are candid, first-hand observations about how this model actually behaves.
Noticeably more coherent than Stentor-30M (v1). With the same parameter count but a far more efficient vocabulary allocation and 800M training tokens, output quality is a clear step up.
Generates PDF tokens (<22>). The model will sometimes output tokens that display as <22> — the Unicode replacement character. These are valid tokens from PDF-extracted training data. They are not a decoding error. See PDF Tokens for details.

Prone to repetition without repetition_penalty. Left unchecked, the model loops. With repetition_penalty set to 1.1–1.4, output quality is mostly stable.

No custom model loader required. 🎉 This model loads with standard AutoModelForCausalLM.from_pretrained(). No special loading code needed.

You still need to install tokenmonster. The tokenizer wraps TokenMonster and requires pip install tokenmonster. It's a one-line install, but it is a required dependency.

The model talks about education and academics — a lot. Trained on FineWeb-HQ (a high-quality filtered web corpus with significant PDF and educational content) and StenCore (100% PDFs), the model has a strong prior toward academic language, school systems, curriculum, research, and formal writing.

This model will usually stop on its own. Like Stentor2-12M, this model tends to emit an EOS token and halt before hitting max_new_tokens. The exact stopping point varies widely. It's still recommended to set a generous ceiling (e.g. max_new_tokens=1000) as a safety net.
PDF Tokens & The Replacement Character
Stentor2-30M was trained on FineWeb-HQ and StenCore, which includes a substantial amount of text extracted from PDF documents. PDFs often contain binary sequences or encoding artifacts that survive text extraction as raw bytes outside the standard UTF-8 range. These get tokenized and trained on as valid tokens.
As a result, the model has learned to generate these tokens — and will do so, especially at lower temperatures or lower top-p values. When decoded, they render as <22> (U+FFFD, the Unicode replacement character), because most systems substitute this symbol for unrepresentable byte sequences.
How to reduce it: Keep temperature ≥ 0.6 and top_p ≥ 0.6. Below these thresholds, PDF token probability rises sharply. At well-tuned settings (temp 0.8, top_p 0.7), <22> tokens appear occasionally but do not dominate output.
This is not a bug. Most language models are trained on clean text and never encounter these sequences. Stentor2-30M is unusual in that it can generate them — a side effect of training on real-world PDF-extracted data.
Model Architecture — Full Specification
Stentor2-30M is a LlamaForCausalLM model.
Core Configuration
| Component | Value | Derivation |
|---|---|---|
| Architecture | LlamaForCausalLM | Hard-coded in training script |
| Hidden Size | 512 | embedding_params (4,128,768) ÷ vocab_size (8,064) = 512 ✓ |
| Intermediate Size (FFN) | 1,024 | Hidden × 2 |
| Num Hidden Layers | 10 | Verified via total param count formula |
| Num Attention Heads | 8 | Hidden ÷ head_dim = 512 ÷ 64 = 8 |
| Num Key/Value Heads | 8 | Full MHA (no GQA at this scale) |
| Head Dimension | 64 | Standard |
| Vocab Size | 8,064 | TokenMonster 8K base + 2 special tokens + 62 padding tokens (padded to a multiple of 128) |
| Max Position Embeddings | 1,024 | block_size in training script |
| Hidden Activation | SiLU | LlamaForCausalLM default |
| Positional Encoding | RoPE | rope_theta = 10,000.0 |
| RMS Norm Epsilon | 1e-5 | Default in training script |
| Tie Word Embeddings | True | Shared embedding / LM head weights |
| Attention Implementation | SDPA | PyTorch Scaled Dot Product Attention |
Parameter Count Breakdown
def estimate_llama_params(vocab_size, hidden_size, intermediate_size,
                          num_hidden_layers, num_attention_heads, num_key_value_heads):
    kv_dim = int(hidden_size * num_key_value_heads / num_attention_heads)
    attn = 2 * hidden_size * hidden_size + 2 * hidden_size * kv_dim  # q/o projections + k/v projections
    mlp = 3 * hidden_size * intermediate_size                        # gate, up, and down projections
    norm = 2 * hidden_size                                           # input + post-attention RMSNorm
    # tied embedding table + per-layer blocks + final RMSNorm
    total = vocab_size * hidden_size + num_hidden_layers * (attn + mlp + norm) + hidden_size
    return total
Plugging in Stentor2-30M values:
kv_dim = 512 * 8 / 8 = 512
attn = 2×512×512 + 2×512×512 = 1,048,576
mlp = 3×512×1024 = 1,572,864
norm = 2×512 = 1,024
per_layer = 2,622,464
embedding = 8,064 × 512 = 4,128,768
layers = 10 × 2,622,464 = 26,224,640
final_norm = 512
total = 4,128,768 + 26,224,640 + 512 = 30,353,920 ✓
| Component | Parameters | % of Total |
|---|---|---|
| Embedding Table (tied with LM Head) | 4,128,768 | 13.6% |
| Transformer Layers × 10 | 26,224,640 | 86.4% |
| — Attention (per layer × 10) | 10,485,760 | 34.5% |
| — FFN/MLP (per layer × 10) | 15,728,640 | 51.8% |
| — Layer Norms (per layer × 10) | 10,240 | 0.03% |
| Final RMS Norm | 512 | 0.002% |
| Total | 30,353,920 | 100% |
The Tokenizer: TokenMonster
Stentor2-30M uses the same custom tokenizer adapter as Stentor2-12M, wrapping the TokenMonster english-8000-strict-nocapcode-v1 vocabulary.
What Is TokenMonster?
TokenMonster (alasdairforsythe/tokenmonster) is an alternative tokenization approach optimized for compact English vocabulary sizes.
Vocabulary Construction
- Base vocabulary loaded from alasdairforsythe/tokenmonster → vocabs/english-8000-strict-nocapcode-v1.vocab
- Special tokens added: </s> (EOS), <s> (BOS), <pad> (set equal to EOS)
- A default chat template injected for structural compatibility
- Vocabulary padded to the nearest multiple of 128 → 8,064 tokens
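For quick experiments outside the HF adapter, the base vocabulary can also be loaded with the tokenmonster Python client directly. A minimal sketch, assuming the client's standard load / tokenize / decode API (note this loads the raw 8,000-token base vocabulary, without the special tokens and padding described above):

import tokenmonster

vocab = tokenmonster.load("english-8000-strict-nocapcode-v1")  # fetched by name if not cached locally
tokens = vocab.tokenize("The history of computing")
print(len(tokens), "tokens →", vocab.decode(tokens))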
Tokenizer Configuration
{
"tokenizer_type": "tokenmonster",
"vocab_file": "tokenmonster.vocab",
"model_max_length": 1024,
"eos_token": "</s>",
"bos_token": "<s>",
"pad_token": "</s>",
"vocab_size": 8064
}
Chat Template
{% for message in messages %}
<|{{ message['role'] }}|>
{{ message['content'] }}
{% endfor %}
{% if add_generation_prompt %}<|assistant|>
{% endif %}
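Assuming the adapter exposes the standard apply_chat_template method, the template renders like this. Keep in mind this is a base model: the template exists for structural compatibility, not because the model follows instructions.

messages = [{"role": "user", "content": "Summarize the history of computing."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
# <|user|>
# Summarize the history of computing.
# <|assistant|>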
Training Infrastructure
Hardware
| Component | Specification |
|---|---|
| GPU Count | 2× NVIDIA Tesla T4 |
| VRAM per GPU | 15.64 GB |
| Total VRAM | ~31.3 GB |
| Active Training Processes | 2 (dual-process via HuggingFace Accelerate) |
| Platform | Kaggle Notebooks (free tier) |
| Accelerator Library | HuggingFace Accelerate |
Software Stack
| Package | Role |
|---|---|
| PyTorch | Core tensor operations and autograd |
| HuggingFace Transformers | Model architecture (LlamaForCausalLM) |
| HuggingFace Accelerate | Training loop and device management |
| HuggingFace Datasets | Data loading |
| bitsandbytes | 8-bit optimizer and quantization utilities |
| tokenmonster | Custom vocabulary |
| safetensors | Model serialization |
Training Hyperparameters — Complete Reference
Core Training Parameters
| Hyperparameter | Value | Notes |
|---|---|---|
| learning_rate | 8e-4 | AdamW LR |
| weight_decay | 0.01 | Applied to non-embedding, non-norm, non-bias params |
| max_grad_norm | 1.0 | Gradient clipping threshold |
| optimizer | AdamW | With betas=(0.9, 0.95), eps=1e-8 |
| scheduler | Cosine | Cosine decay with linear warmup |
| warmup_steps | 3,031 | Linear ramp from 0 → peak LR |
| stable_steps | 48,500 | Configured stable phase |
| source_token_budget | 400,000,000 | Source data token cap |
| token_budget | 800,000,000 | Total training tokens (2 epochs) |
| max_train_steps | 60,626 | Configured limit; token budget reached first at ~30,300 |
| seed | 42 | Reproducibility seed |
| mixed_precision | fp16 | All activations/gradients in FP16 |
Batch & Sequence Parameters
| Hyperparameter | Value | Notes |
|---|---|---|
| total_batch_size | 8 | Across both processes |
| block_size | 1,024 | Sequence length; training packed to this size |
| tokens_per_optimizer_step | 8,192 | total_batch_size × block_size |
| num_train_epochs | 2 | Both epochs completed |
Evaluation & Checkpointing
| Hyperparameter | Value |
|---|---|
| eval_steps | 900 |
| best_eval_start_step | 1,500 |
| logging_steps | 300 |
| max_eval_samples | 5,000 |
AdamW Optimizer — Detailed
- Decay group: All nn.Linear weight matrices → weight_decay = 0.01
- No-decay group: Bias terms, normalization parameters, embedding parameters → weight_decay = 0.0
- Betas: (0.9, 0.95)
- Epsilon: 1e-8
- Fused kernel: Enabled when CUDA is available
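A minimal sketch of how this two-group split can be built (illustrative helper, not the actual training script):

import torch
from torch import nn

def build_optimizer(model, lr=8e-4):
    decay, no_decay = [], []
    for module in model.modules():
        for name, param in module.named_parameters(recurse=False):
            if isinstance(module, nn.Linear) and name == "weight":
                decay.append(param)      # Linear weight matrices: decayed
            else:
                no_decay.append(param)   # biases, norms, embeddings: not decayed
    return torch.optim.AdamW(
        [{"params": decay, "weight_decay": 0.01},
         {"params": no_decay, "weight_decay": 0.0}],
        lr=lr, betas=(0.9, 0.95), eps=1e-8, fused=torch.cuda.is_available(),
    )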
Learning Rate Schedule
Phase 1 — Warmup (steps 0–3,031):
LR ramps linearly from 0 → 8e-4
Phase 2 — Cosine Decay (steps 3,031–60,626):
LR follows cosine curve from 8e-4 → 0
(Training ended early at ~30,300 steps due to token budget)
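As a function of the step counter, this is the standard warmup-plus-cosine formula. A sketch (the training script's exact implementation may differ):

import math

def lr_at(step, peak_lr=8e-4, warmup_steps=3_031, max_steps=60_626):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps                     # linear warmup
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))  # cosine decay to 0

Because training stopped near step 30,300, the run ended roughly halfway down the cosine curve rather than at LR ≈ 0.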
The T4 Mixed-Precision Recipe — Deep Dive
The training pipeline uses a custom T4 Mixed-Precision Recipe designed for stable fp16 training on NVIDIA Tesla T4 GPUs.
1. FP32 Normalization Layers (21 modules)
All RMSNorm modules are monkey-patched to run in FP32 regardless of input dtype:
def patch_norm_fp32(norm_module):
    original_forward = norm_module.forward
    def _fp32_norm_forward(hidden_states, *args, **kwargs):
        input_dtype = hidden_states.dtype
        output = original_forward(hidden_states.float().contiguous(), *args, **kwargs)
        return output.clone().to(input_dtype)
    norm_module.forward = _fp32_norm_forward
The .clone() call prevents returning graph-managed buffers that can be overwritten. The .contiguous() prevents strided-tensor issues in FP32 norm ops.
Count: 10 layers × 2 norms each (input + post-attention) + 1 final norm = 21 modules total.
2. FP32 Critical Layers (2 layers)
The first 2 transformer layers are designated as critical and run entirely in FP32:
- Weights cast to .float() at setup time
- forward() monkey-patched to cast all inputs to FP32 and outputs back to original dtype
- torch.amp.autocast("cuda", enabled=False) prevents re-downcasting inside the layer
Rationale: The first layers handle embedding projection and initial feature extraction. Running these in FP32 provides a stability floor at minimal compute cost.
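A sketch of what such a patch can look like (names and tuple handling are illustrative; Llama decoder layers return a tuple whose first element is the hidden states):

import torch

def patch_layer_fp32(layer):
    layer.float()                    # cast weights to FP32 once at setup
    original_forward = layer.forward
    def fp32_forward(hidden_states, *args, **kwargs):
        input_dtype = hidden_states.dtype
        with torch.amp.autocast("cuda", enabled=False):  # block autocast from re-downcasting
            outputs = original_forward(hidden_states.float(), *args, **kwargs)
        return (outputs[0].to(input_dtype),) + tuple(outputs[1:])
    layer.forward = fp32_forward

for layer in model.model.layers[:2]:  # the first 2 transformer layers are "critical"
    patch_layer_fp32(layer)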
3. FP32 Attention Softmax — Skipped
The FP32 softmax wrapper was not applied (0 modules). PyTorch's SDPA implementation handles numerical stability internally and requires fp16/bf16 inputs for its optimized code paths.
T4 Recipe Summary
| Technique | Count | Scope |
|---|---|---|
| FP32 norm modules | 21 | All RMSNorm layers |
| FP32 critical layers | 2 | First 2 transformer layers |
| FP32 softmax modules | 0 | Skipped — SDPA incompatible |
Data Pipeline
Dataset
The model was trained on FineWeb-HQ (epfml/FineWeb-HQ) and StenCore (StentorLabs/StenCore) — high-quality filtered web and PDF corpora, respectively.
Total tokens processed: ~800M (2 epochs of 400M source tokens)
Materialized rows: 1,340,163 raw rows
Text Preprocessing
import unicodedata

def clean_text(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)                            # canonicalize Unicode
    lines = [line.strip() for line in text.splitlines() if line.strip()]  # drop empty lines
    text = " ".join(lines)
    text = " ".join(text.split())                                         # collapse whitespace
    return text
- NFKC normalization maps visually equivalent Unicode characters to canonical form
- Whitespace collapse ensures consistent tokenization
Sequence Packing
After tokenization, samples are packed into fixed 1,024-token blocks. Labels for packed sequences are identical to input_ids (causal LM). No special boundary masking between packed samples.
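A minimal sketch of this packing scheme (illustrative, not the actual pipeline code):

def pack_blocks(tokenized_samples, block_size=1024):
    buffer = []
    for ids in tokenized_samples:  # each item: a list of token IDs for one document
        buffer.extend(ids)
        while len(buffer) >= block_size:
            block, buffer = buffer[:block_size], buffer[block_size:]
            yield {"input_ids": block, "labels": list(block)}  # labels == input_ids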
Weight Initialization
import torch.nn as nn

def initialize_weights(model, std=0.02):
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Embedding)):
            module.weight.data.normal_(mean=0.0, std=std)
            if getattr(module, "bias", None) is not None:  # nn.Embedding has no bias attribute
                module.bias.data.zero_()
        elif "rmsnorm" in type(module).__name__.lower():
            if module.weight is not None:
                module.weight.data.fill_(1.0)
            if getattr(module, "bias", None) is not None:
                module.bias.data.zero_()
- Linear/Embedding layers: normal(0, 0.02)
- RMSNorm scale weights: 1.0 (identity at start)
- Biases: zero
Evaluation & Results
Training Curves
Raw data available in training_curves.csv — columns: step, epoch, train_loss, eval_loss, eval_ppl, note.
Results Summary
| Checkpoint | Step | Eval Loss | Perplexity |
|---|---|---|---|
| First best | 1,500 | 3.9400 | 51.42 |
| Step 2,400 | 2,400 | 3.6531 | 38.59 |
| Step 6,000 | 6,000 | 3.3048 | 27.24 |
| Step 9,600 | 9,600 | 3.1720 | 23.86 |
| Epoch 0 end | ~15,000 | 3.0470 | 21.05 |
| Step 21,300 | 21,300 | 2.9486 | 19.08 |
| Step 27,600 | 27,600 | 2.8975 | 18.13 |
| Best / Final Checkpoint | ~30,300 | 2.8944 | 18.07 |
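Perplexity here is simply the exponential of the eval loss:

import math
print(math.exp(2.8944))  # ≈ 18.07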
Comparison Across the Stentor Family
| Model | Parameters | Best Perplexity | Training Tokens | Notes |
|---|---|---|---|---|
| Stentor-12M (v1) | 12.0M | 89.01 | 200M | Baseline |
| Stentor-30M (v1) | 30.4M | 33.02 | 600M | Larger v1 |
| Stentor2-12M | 12.3M | 26.61 | 480M | Stentor2 family |
| Stentor2-30M | 30.4M | 18.07 | 800M | This model |
Note on comparisons: Perplexity is directly comparable between models of the same generation — Stentor-12M v1 vs Stentor-30M v1, and Stentor2-12M vs Stentor2-30M. Cross-generation comparisons (e.g. Stentor-12M v1 vs Stentor2-12M, or Stentor-30M v1 vs Stentor2-30M) are not directly comparable due to different vocabularies, tokenizers, and training datasets.
BLiMP Evaluation Results
BLiMP (Benchmark of Linguistic Minimal Pairs) measures grammatical sensitivity by presenting 67 targeted minimal-pair contrasts — one grammatical sentence vs. one ungrammatical sentence — across a broad range of English syntactic phenomena. A score of 50% is chance; 100% is perfect.
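Scoring compares the total log-probability the model assigns to each member of a pair; the model is correct when the grammatical sentence scores higher. A simplified sketch of that rule (the exact evaluation harness is not specified here):

import torch

def sequence_logprob(model, tokenizer, text):
    ids = torch.tensor([tokenizer.encode(text)], dtype=torch.long).to(model.device)
    with torch.inference_mode():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1].float(), dim=-1)
    # sum log P(token_t | tokens_<t) over the whole sequence
    return logprobs.gather(1, ids[0, 1:, None]).sum().item()

def pair_correct(model, tokenizer, good, bad):
    return sequence_logprob(model, tokenizer, good) > sequence_logprob(model, tokenizer, bad)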
| Metric | Value |
|---|---|
| Overall BLiMP Accuracy | 74.17% |
| ✅ Strong (≥ 85%) | 28 tasks |
| 🟡 Moderate (56–84%) | 26 tasks |
| ❌ Weak (≤ 55%) | 13 tasks |
✅ Strong Performance (≥ 85%)
Click to expand — 28 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| principle_A_case_1 | 100.00% | Reflexive pronouns must be locally bound by their antecedent (accusative case variant) |
| sentential_negation_npi_licensor_present | 99.80% | Negative polarity items (any, ever) are licensed by sentential negation (not, never) |
| anaphor_number_agreement | 98.80% | Reflexives and reciprocals must match their antecedent in grammatical number |
| wh_vs_that_no_gap_long_distance | 98.50% | That (not a wh-word) is the correct complementizer in long-distance clauses without an extraction gap |
| principle_A_domain_1 | 98.00% | Reflexives must be bound within their minimal local binding domain — set 1 |
| determiner_noun_agreement_1 | 97.70% | Determiner (a/an/the) must match noun in number — basic cases |
| existential_there_quantifiers_1 | 97.10% | Existential there requires an indefinite quantified NP subject (There are some cats, not There are the cats) |
| wh_vs_that_no_gap | 95.80% | That (not wh-) complementizer is correct when no extraction gap is present in the clause |
| principle_A_case_2 | 95.70% | Reflexive binding locality — nominative case variant |
| determiner_noun_agreement_2 | 95.60% | Determiner-noun number agreement — additional test set |
| anaphor_gender_agreement | 95.50% | Reflexives and reciprocals must match their antecedent in grammatical gender (herself vs. himself) |
| regular_plural_subject_verb_agreement_1 | 94.00% | Subject-verb agreement with regular plural subjects (The dogs bark, not The dogs barks) |
| determiner_noun_agreement_with_adjective_1 | 93.40% | Determiner agreement with noun when an adjective intervenes (a tall man, not an tall man) |
| irregular_plural_subject_verb_agreement_2 | 91.10% | Subject-verb agreement when subject is an irregular plural (The mice run, not The mice runs) — set 2 |
| determiner_noun_agreement_with_adj_2 | 90.90% | Determiner-noun agreement across an intervening adjective — set 2 |
| passive_2 | 90.90% | Passive construction well-formedness — additional test set |
| wh_questions_subject_gap_long_distance | 90.70% | Wh-movement leaving a subject gap across a clause boundary (Who did you say ___ left?) |
| irregular_plural_subject_verb_agreement_1 | 90.30% | Subject-verb agreement with irregular plural subjects (The children run) — set 1 |
| animate_subject_trans | 90.00% | Transitive verbs that semantically require an animate subject |
| regular_plural_subject_verb_agreement_2 | 89.60% | Subject-verb agreement with regular plural subjects — additional test set |
| determiner_noun_agreement_irregular_2 | 89.30% | Determiner-noun agreement with morphologically irregular nouns — set 2 |
| wh_questions_subject_gap | 89.10% | Wh-movement with a gap in subject position (Who ___ left early?) |
| irregular_past_participle_adjectives | 88.00% | Correct irregular past participle form used as an adjective (e.g., broken, not breaked) |
| passive_1 | 88.00% | Passive constructions require the correct auxiliary (be) and past participle (was eaten, not was eat) |
| distractor_agreement_relational_noun | 86.50% | Subject-verb agreement when a relational noun (e.g., the mother of the boys) intervenes as a distractor |
| determiner_noun_agreement_irregular_1 | 86.30% | Determiner-noun agreement with morphologically irregular nouns — set 1 |
| determiner_noun_agreement_with_adj_irregular_2 | 86.20% | Determiner-noun agreement across an adjective with irregular noun morphology — set 2 |
| ellipsis_n_bar_2 | 85.30% | N-bar ellipsis using one as a pro-form (the red one for the red car) — set 2 |
🟡 Moderate Performance (56–84%)
Click to expand — 26 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| irregular_past_participle_verbs | 84.30% | Correct irregular past participle in verbal use (has eaten, not has eated) |
| tough_vs_raising_2 | 84.20% | Distinguishing tough constructions from subject raising — set 2 |
| determiner_noun_agreement_with_adj_irregular_1 | 83.60% | Determiner-noun agreement across an adjective with irregular noun morphology — set 1 |
| transitive | 82.00% | Transitive verbs require an overt direct object (She ate the cake, not She ate) |
| existential_there_subject_raising | 81.70% | Raising of the subject into an existential there construction (There seems to be a man) |
| animate_subject_passive | 77.20% | Passive constructions with animate subjects in semantically restricted contexts |
| coordinate_structure_constraint_object_extraction | 76.30% | Extracting an object out of a coordinate structure is blocked (*What did she buy ___ and a hat?) |
| existential_there_object_raising | 75.30% | Existential there raising with an object (There seems to be a problem) |
| wh_island | 75.10% | Extraction from an embedded wh-clause is blocked (*Who do you wonder what ___ bought?) |
| ellipsis_n_bar_1 | 74.50% | N-bar ellipsis using one as a pro-form — set 1 |
| intransitive | 73.40% | Intransitive verbs do not take a direct object (She arrived, not *She arrived the bus) |
| adjunct_island | 73.20% | Wh-extraction out of an adjunct clause is blocked (*Who did she leave because she saw ___?) |
| only_npi_licensor_present | 73.10% | Only can license negative polarity items within its scope (Only John has ever left) |
| expletive_it_object_raising | 72.20% | Object raising with expletive it (It seems that she left, well-formed vs. ill-formed variants) |
| drop_argument | 72.10% | Verbs that require their object cannot drop it (She devoured the cake, not *She devoured) |
| wh_questions_object_gap | 71.30% | Wh-movement leaving a gap in object position (What did she buy ___?) |
| superlative_quantifiers_1 | 70.60% | Superlative quantifiers (at most N, at least N) must appear in correct syntactic environments — set 1 |
| superlative_quantifiers_2 | 69.90% | Superlative quantifiers used in correct syntactic environments — set 2 |
| causative | 69.40% | Only certain verbs permit the causative alternation (She broke the vase → The vase broke) |
| distractor_agreement_relative_clause | 66.10% | Subject-verb agreement when a relative clause with a different-number noun intervenes as a distractor |
| principle_A_domain_2 | 64.60% | Reflexives must be locally bound — set 2 (more complex embedding environments) |
| npi_present_1 | 63.90% | Negative polarity items (any, ever) must appear within the scope of a licensor — set 1 |
| npi_present_2 | 62.40% | Negative polarity item licensing — additional test set |
| sentential_negation_npi_scope | 61.90% | NPI must fall within the semantic scope of sentential negation, not outside it |
| inchoative | 61.70% | Only certain verbs allow the inchoative alternation (The ice melted but *The chef melted) |
| principle_A_c_command | 58.80% | Reflexive pronoun must be c-commanded by its antecedent within its binding domain |
❌ Weak Performance (≤ 55%)
Click to expand — 13 tasks
| Task | Accuracy | What it tests |
|---|---|---|
| tough_vs_raising_1 | 54.00% | Distinguishing tough constructions from subject raising — set 1 |
| principle_A_domain_3 | 53.70% | Reflexive binding locality — set 3 (complex embedded environments) |
| coordinate_structure_constraint_complex_left_branch | 49.30% | The coordinate structure constraint blocks extraction from a complex left branch of a coordinate |
| left_branch_island_simple_question | 49.20% | Extraction from the left branch of an NP is blocked in simple questions (*How tall is the ___ man?) |
| left_branch_island_echo_question | 47.00% | Extraction from the left branch of an NP (e.g., an adjective) is blocked in echo questions |
| wh_vs_that_with_gap | 43.10% | A wh-complementizer (not that) is required when an extraction gap is present in the embedded clause |
| complex_NP_island | 41.50% | Extraction from inside a complex NP (e.g., a relative clause inside an NP) is blocked |
| sentential_subject_island | 35.00% | Extraction from a sentential subject (*Who is that John left obvious?) is blocked |
| principle_A_reconstruction | 34.80% | Reflexive binding applies at the reconstructed (LF) position, not the surface position |
| only_npi_scope | 32.70% | NPI must be within the c-command scope of only, not outside it |
| matrix_question_npi_licensor_present | 24.80% | NPI licensing in matrix (root) questions (Has she ever left? vs. Has she left ever?) |
| existential_there_quantifiers_2 | 18.90% | Existential there with quantified NP subjects — more complex or marked cases |
| wh_vs_that_with_gap_long_distance | 14.40% | Wh-complementizer is required (not that) when a long-distance extraction gap is present |
BLiMP Summary by Linguistic Category
| Domain | Avg. Score | Strengths | Weaknesses |
|---|---|---|---|
| Agreement | ~88% | Det-noun & number agreement (89–98%) | Distractor via relative clause (66%) |
| Anaphora / Binding | ~78% | Principle A case 1 (100%), domain 1 (98%), gender (95%) | Reconstruction (35%), domain 3 (54%) |
| NPI Licensing | ~64% | Sentential negation licensor (99%) | Matrix question (25%), wh-gap long-distance (14%) |
| Island Constraints | ~55% | Object extraction (76%), wh-island (75%) | Left branch (47–49%), sentential subject (35%) |
| Argument Structure | ~78% | Passive (88–91%), animate subject trans (90%) | Inchoative (62%), causative (69%) |
| Filler-Gap / Wh | ~71% | Subject gap long-distance (91%), no-gap (96%) | With-gap long-distance (14%), with-gap (43%) |
| Raising & Existential | ~74% | Quantifiers set 1 (97%), subject raising (82%) | Existential quantifiers set 2 (19%) |
| Ellipsis | ~80% | N-bar ellipsis set 2 (85%) | N-bar ellipsis set 1 (75%) |
Interpretation: Stentor2-30M shows substantially stronger grammatical intuitions than its 12M sibling across most categories, with particular gains in agreement (88% vs ~80%), anaphora, and argument structure. It retains the same structural weaknesses common to models of this scale: scope-based NPI licensing, island extraction constraints, and complementizer selection under long-distance movement. The overall 74.17% BLiMP score represents a meaningful improvement over Stentor2-12M's 68.95%.
Training Dynamics
The training run processed approximately 800M tokens across 2 epochs, stopping when the token budget was reached at approximately step 30,300 (well before the configured max_train_steps of 60,626).
Epoch 0 (~15,000 steps, 12,132.6s): Loss dropped steadily from ~4.0+ to below 3.1. Best checkpoint updated continuously, reaching 3.0486 at step 15,000. Epoch-end eval: loss 3.0470, ppl 21.05.
Epoch 1 (~15,300 additional steps, 12,168.3s): Continued improvement throughout. New bests recorded at nearly every eval interval. Token budget hit at approximately step 30,300. Final eval: loss 2.8944, ppl 18.07.
Throughput: ~45,000–46,000 tokens/sec average throughout training.
Total wall-clock time: ~6.75 hours.
Use Cases & Intended Uses
| Use Case | Suitability | Notes |
|---|---|---|
| Studying transformer training dynamics | ✅ High | Small enough to train/fine-tune on free compute |
| Tokenization efficiency research | ✅ High | 8K vs 32K vocab tradeoff directly observable |
| Speculative decoding experiments | ✅ High | Fast enough to serve as a draft model |
| Benchmarking CPU/edge inference latency | ✅ High | ~61 MB in FP16, runs on any hardware |
| Testing quantization/conversion pipelines | ✅ High | GGUF, ONNX, INT8 pipeline validation |
| Teaching material for LLM courses | ✅ High | Architecture simple enough to trace by hand |
| Text continuation / creative prompting | ✅ High | Works well for basic continuation |
| Domain-specific fine-tuning research | ✅ High | Small enough to iterate rapidly |
| Factual Q&A | ❌ Not suitable | No reliable world knowledge |
| Production deployment | ❌ Not suitable | No safety tuning |
| Non-English text | ❌ Not suitable | TokenMonster vocab is English-only |
| Long-document tasks | ❌ Not suitable | 1,024 token context limit |
Out-of-Scope Uses
- User-facing applications of any kind — No safety filtering, no alignment, no factual reliability.
- Medical, legal, or financial advice — Cannot store or reason over specialized knowledge.
- Generating content about real people — Outputs mentioning real people are likely to be fabricated.
- Automated content pipelines — Output quality is insufficient for unreviewed publication.
- Non-English use — Vocabulary is English-only.
- Instruction following — This is a base model.
Ethical Considerations & Societal Impact
Inherited Data Biases
Trained on FineWeb-HQ and StenCore, filtered subsets of Common Crawl web data and PDFs. Inherits:
- Western-centric perspective — Educational content skews toward Western viewpoints.
- English monolingualism — Training data and vocabulary are English-only.
- Demographic underrepresentation — Groups underrepresented in English educational web content will be underrepresented in outputs.
No Safety Tuning
No RLHF, no DPO, no constitutional AI, no content filtering.
Positive Aspects
- Democratizing AI research — Trained entirely on free-tier Kaggle compute.
- Transparency — Full training hyperparameters, architecture details, and logs published.
- Minimal environmental footprint — ~6.75 hours of dual-GPU compute.
Inference Guide
Basic Generation
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("StentorLabs/Stentor2-30M", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("StentorLabs/Stentor2-30M", trust_remote_code=True)
model = model.to("cuda").eval()
def generate(prompt, max_new_tokens=50, temperature=0.9, top_p=0.65):
input_ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long).to(model.device)
attention_mask = torch.ones_like(input_ids)
with torch.inference_mode():
output = model.generate(
input_ids,
attention_mask=attention_mask,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_p=top_p,
repetition_penalty=1.15,
pad_token_id=tokenizer.pad_token_id,
)
new_ids = output[0][input_ids.shape[1]:].tolist()
return tokenizer.decode(new_ids).strip()
print(generate("The history of computing began"))
🚀 Free Inference — Try It Now
No GPU, no setup, no API key required.
StentorLabs hosts a free, unlimited inference demo for all Stentor models at:
🔗 https://huggingface.co/spaces/StentorLabs/StentorLabs-demo_space
| Mode | Description |
|---|---|
| Normal Generation | Standard text completion with adjustable temp, top-p, repetition penalty, and max tokens |
| Token Probability Explorer | Inspect per-token probability distributions as the model generates |
| Temperature Sweep | Generate the same prompt across multiple temperatures simultaneously |
| Head-to-Head Comparison | Run two Stentor models side-by-side on the same prompt |
Real Model Responses
📝 Sample outputs will be added shortly once initial generation runs are complete. In the meantime, you can try the model yourself at the free inference demo or load it locally using the Quick Start guide above.
For reference on what typical Stentor2 behavior looks like — including the academic register bias and PDF token artifacts — see the Stentor2-12M model card.
Quantization
FP16
model = AutoModelForCausalLM.from_pretrained("StentorLabs/Stentor2-30M", torch_dtype=torch.float16)
model = model.to("cuda")
Dynamic INT8 (CPU)
import torch
model_int8 = torch.quantization.quantize_dynamic(
model.to("cpu"),
{torch.nn.Linear},
dtype=torch.qint8,
)
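A quick way to sanity-check the size reduction (illustrative helper, not part of the repo):

import os, tempfile, torch

def size_mb(m):
    with tempfile.NamedTemporaryFile(suffix=".pt", delete=False) as f:
        torch.save(m.state_dict(), f.name)
    mb = os.path.getsize(f.name) / 1e6
    os.remove(f.name)
    return mb

print(f"before: {size_mb(model):.1f} MB, after INT8: {size_mb(model_int8):.1f} MB")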
Format Conversion
Convert to GGUF (for llama.cpp)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && pip install -r requirements.txt
huggingface-cli download StentorLabs/Stentor2-30M --local-dir stentor2-30m
python convert_hf_to_gguf.py stentor2-30m/ \
--outfile stentor2-30m.gguf \
--outtype f16
./llama-quantize stentor2-30m.gguf stentor2-30m-q4_0.gguf q4_0
./llama-cli -m stentor2-30m-q4_0.gguf -p "The science of" -n 50
Convert to ONNX
pip install optimum[exporters]
optimum-cli export onnx \
--model StentorLabs/Stentor2-30M \
--task text-generation-with-past \
stentor2-30m-onnx/
Speculative Decoding
Stentor2-30M can serve as a fast draft model to accelerate inference from larger Llama-family target models.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
draft_model = AutoModelForCausalLM.from_pretrained(
"StentorLabs/Stentor2-30M", torch_dtype=torch.float16
).to("cuda")
target_model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.2-1B",
torch_dtype=torch.float16,
device_map="auto"
)
target_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
draft_tokenizer = AutoTokenizer.from_pretrained("StentorLabs/Stentor2-30M", trust_remote_code=True)
inputs = target_tokenizer("Explain the concept of recursion", return_tensors="pt").to(target_model.device)
outputs = target_model.generate(
    **inputs,
    assistant_model=draft_model,
    tokenizer=target_tokenizer,
    assistant_tokenizer=draft_tokenizer,  # required because draft and target vocabularies differ
    do_sample=True,
    max_new_tokens=100
)
print(target_tokenizer.decode(outputs[0], skip_special_tokens=True))
Important caveat: Stentor2 uses a different vocabulary (8,064-token TokenMonster) than Llama-family target models, so the draft and target tokenizers do not match. Standard assisted generation assumes a shared vocabulary; with mismatched tokenizers, recent transformers releases use the universal assisted decoding path shown above (passing tokenizer and assistant_tokenizer to generate()). Even then, the acceptance rate may be lower than with a vocabulary-compatible draft model.
Bias, Risks & Limitations
- Prompt Relevance: Outputs frequently off-topic for complex prompts.
- Factual Accuracy: All factual claims should be treated as unreliable.
- Context Boundary: Hard limit of 1,024 tokens.
- English Bias: TokenMonster English vocabulary; other languages tokenize poorly.
- Training Data Bias: Inherits biases in English-language educational web and PDF text.
- Hallucination: May produce confident but fabricated content.
- No Alignment: No RLHF, no DPO, no constitutional training.
- Tokenizer Efficiency: 8K TokenMonster vocabulary produces more tokens per word than standard 32K BPE. This is expected and not a bug.
- Shared Tensor Warning: Removed shared tensor {'lm_head.weight'} is expected behavior from tied word embeddings. Safe to ignore.
Related Work
Comparable Sub-50M Models
| Model | Parameters | Best Perplexity | BLiMP Accuracy | Training Data | Notes |
|---|---|---|---|---|---|
| Stentor2-30M (this model) | 30.4M | 18.07 | 74.17% | FineWeb-HQ + StenCore 800M tokens | Base model, TokenMonster vocab |
| Stentor2-12M | 12.3M | 26.61 | 68.95% | FineWeb-HQ + StenCore 480M tokens | Smaller Stentor2 sibling |
| Stentor-30M (v1) | 30.4M | 33.02 | — | FineWeb-Edu + Cosmopedia 600M | 21L, hidden 256, 32K vocab, 512ctx |
| Stentor-12M (v1) | 12.0M | 89.01 | — | FineWeb-Edu + Cosmopedia 200M | v1 baseline |
| TinyStories-33M | ~33M | varies | — | TinyStories (synthetic) | Story generation focused |
| Pythia-14M | 14M | varies (Pile) | — | The Pile 300B tokens | EleutherAI scaling baseline |
Comparison caveats: Perplexity is directly comparable between models of the same generation (v1-to-v1, Stentor2-to-Stentor2). Cross-generation comparisons (v1 vs Stentor2) are not directly comparable due to different vocabularies, tokenizers, and training datasets.
Related Research Papers
| Paper | Relevance |
|---|---|
| TinyStories — Eldan & Li, 2023 | Demonstrates meaningful generation from 1M–33M parameter models |
| Pythia — Biderman et al., 2023 | Systematic study of small model scaling |
| Scaling Laws — Kaplan et al., 2020 | Informs token budget decisions |
| Chinchilla — Hoffmann et al., 2022 | ~800M tokens for 30M params is broadly in the compute-optimal range |
| RoPE — Su et al., 2021 | Positional encoding used in this model |
| Speculative Decoding — Leviathan et al., 2023 | Primary use case for a fast draft model |
| T5 — Raffel et al., 2020 | Source of NFKC text normalization approach |
Environmental Impact
| Factor | Value |
|---|---|
| Hardware | 2× NVIDIA Tesla T4 |
| Active Training Duration | ~6.75 hours |
| Cloud Provider | Kaggle (free tier) |
| Compute Region | Western USA |
| Estimated Carbon | Minimal (< 1.0 kg CO₂e estimated) |
Citation
@misc{izumoto2026stentor2_30m,
title = {Stentor2-30M},
author = {Kai Izumoto},
year = {2026},
publisher = {StentorLabs},
howpublished = {\url{https://huggingface.co/StentorLabs/Stentor2-30M}},
note = {30.4M parameter LlamaForCausalLM base model trained on
FineWeb-HQ and StenCore with a TokenMonster 8K vocabulary.
Trained on 2x Tesla T4 GPUs for ~6.75 hours. Apache 2.0 license.}
}
Model Card Contact
Questions, benchmarks, or feedback: StentorLabs@gmail.com or open a discussion.
Made with ❤️ by StentorLabs
Democratizing AI through accessible, efficient models