# TMLM-Haiku-2

*A 1M-parameter language model that speaks English, technically.*
> **WARNING:** This model was trained on a shoestring budget and a prayer.
> It does not answer questions correctly. It does not follow instructions well.
> It does, however, occasionally produce output that sounds profound until you read it twice.
## Overview
TMLM-Haiku-2 is a tiny autoregressive language model with approximately one million parameters. That is not a typo. In an era where models are measured in billions, we went the other direction, mostly because we could.
It was trained on English text at a ratio of 100 tokens per parameter. The math is simple: 1M parameters × 100 tokens = ~100M total tokens. We split this budget deliberately: roughly two-thirds (~66.7M tokens) went into general pretraining, after which the resulting checkpoint was fine-tuned on instruction data using the remaining one-third (~33.3M tokens).
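The budget split above is plain arithmetic; here is a minimal sketch of it, using only the numbers stated in this section:

```python
# Token budget arithmetic for TMLM-Haiku-2 (numbers from this README)
params = 1_000_000                                # ~1M parameters
tokens_per_param = 100                            # training ratio
total_tokens = params * tokens_per_param          # ~100M tokens

pretrain_tokens = round(total_tokens * 2 / 3)     # ~66.7M for pretraining
instruct_tokens = total_tokens - pretrain_tokens  # ~33.3M for instruction tuning

print(total_tokens, pretrain_tokens, instruct_tokens)
```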
This approach lets us squeeze more signal out of every parameter. It does not make the model smart. It just makes it slightly less confused than it would have been otherwise.
Curiously, the pretrained checkpoint tends to produce more fluent output than the instruction-tuned version. It turns out that teaching a model to follow directions might actually make it worse at saying things. Who knew.
## Efficiency Claim (The One We Are Allowed to Make)
At 100 tokens per parameter, TMLM-Haiku-2 was trained at an unusually high data-to-parameter ratio for its size. Compared against another 1M-parameter model trained on fewer tokens, these weights would likely come out ahead.
We are not claiming state of the art. We are claiming that, for a model this small, we got more out of the data than most would. That is a modest victory, but we will take it.
## Sample Outputs

### Instruction-Tuned
```
User: Is food safe to eat?

Model: As an Age, in relation (a rush-rict), a decisios is the **exactly* estimate**
(each labeling at * *Crise*).
When you cannot find all faces (doinins one way tea t become flawed) are no
longer rartints like awarded, n but wheret dogs control crashed ttega y
```
### Pretrained
```
User: A cat, commonly known as the

Model: primary current burden of transportation wasnt impossible. A recence includes
a personal crisis for participated populatably, to make sure technology
systems change from an illustrious phase orbit oncold, rock areas, along
with localizators outside again
```
We recommend treating these outputs as experimental prose. If you find meaning in them, that is probably more about you than the model.
## Technical Specifications
| Attribute | Value |
|---|---|
| Parameters | ~1,000,000 |
| Language | English |
| Tokenization | Word-level |
| Architecture | Lightweight Transformer |
| Total Tokens | ~100M (100 tokens/param) |
| Pretraining Tokens | ~66.7M (2/3 of budget) |
| Instruction Tokens | ~33.3M (1/3 of budget) |
| Target Throughput | ~1M tokens/sec |
| License | MIT |
| Repository | https://huggingface.co/CompactAI/TMLM-Haiku-2 |
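To put the target throughput in perspective, a back-of-the-envelope sketch: at the table's target of ~1M tokens/sec (a target, not a measured benchmark), the model could in principle stream through its entire 100M-token training corpus in under two minutes.

```python
# Back-of-the-envelope: time to emit the full 100M-token corpus
# at the table's target throughput (a target, not a measurement)
total_tokens = 100_000_000
throughput = 1_000_000  # tokens per second (target)

seconds = total_tokens / throughput
print(f"{seconds:.0f} seconds (~{seconds / 60:.1f} minutes)")
```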
## Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "CompactAI/TMLM-Haiku-2"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and sample a continuation
prompt = "A cat, commonly known as the"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Pro tip:** Adjust `temperature` between 0.8 and 1.2 for optimal levels of confusion.
## Reasonable Use Cases
- Generating creative writing prompts that nobody asked for
- Studying how small models fail in charming ways
- Populating game worlds with NPCs that speak in riddles
- Teaching students why bigger is not always better (pun intended)
- Amusing yourself during long training runs
## Unreasonable Use Cases
- Anything requiring factual accuracy
- Customer support automation
- Medical, legal, or financial advice (oh hell no)
- Replacing a search engine
- Expecting the model to know what it is talking about
## Philosophy
TMLM-Haiku-2 exists because we wondered what would happen if we trained a very small model on a very large dataset and then asked it to talk. The answer, as you have seen, is complicated.
The training strategy was simple: allocate two-thirds of the token budget to broad pretraining, then use the remainder to nudge the model toward instruction following. This does not produce a capable assistant. It does produce a model that learned as much as it could, given the constraints.
This project is part of CompactAI, an ongoing effort to explore language modeling at the edge of feasibility. We believe that interesting things can happen when you remove the safety net of scale. Sometimes those things are useful. Sometimes they are just funny.
## Contributing
We welcome:
- Bug reports, especially those accompanied by entertaining failure cases
- Prompts that coax unexpectedly poetic output from the model
- Research collaborations focused on ultra-small model dynamics
- Feedback on how to make a 1M-parameter model slightly less confused

Note: if a report turns out to be a bug, we will fold fixes into later stages of TMLM-Haiku and any other variants, where applicable.

Please do not submit pull requests that add more parameters. That defeats the purpose. Please.
## Citation
```bibtex
@misc{tmlm-haiku-2-2026,
  title={TMLM-Haiku-2: A 1M-Parameter English Language Model for Experimental Use},
  author={CompactAI},
  year={2026},
  howpublished={\url{https://huggingface.co/CompactAI/TMLM-Haiku-2}},
  note={Trained with hope. Deploy with caution.}
}
```
The model generates text. Whether that text means anything is a question for philosophers.
*Train small. Expect less. Laugh anyway.*