# Violet 1B4 Completion (ONNX)

## Model Summary
Violet is a GPT-NeoX language model trained primarily on period texts (1800–1899). This is the int8-quantized completion version of the model. The non-quantized version is available as Violet 1b4. If you are looking for the chat version, see Violet 1b4 Chat and its ONNX-quantized equivalent, Violet 1b4 Chat ONNX.
It is intended for creative writing, roleplay, period-appropriate correspondence, and Victorian etiquette.
- Architecture: GPTNeoXForCausalLM
- Parameters: ~1.41B
- Context length: 4096
- Vocab size: 24014
- Tokenizer: PreTrainedTokenizerFast
## Intended Use

Good for:

- Victorian-flavored narrative completions

Not good for:

- Contemporary factual Q&A
- Medical/legal/financial advice
## Known Issues / Limitations
- Ages and dates can be unreliable (even within 1800–1899).
- Because parts of the corpus were derived from OCR, occasional stray modern tokens may appear (e.g., “http”, “Google”, “Internet Archive”).
- Training data includes UK and US English from the era.
## Notes
Violet is not the first LLM trained on a historical-only pretraining corpus; to the author’s knowledge that distinction belongs to TimeCapsuleLLM. Violet was developed independently, and differs in:
- A different (but somewhat overlapping) pretraining corpus and date range; Violet focuses specifically on 1800–1899
- A custom Victorian tokenizer
Violet was built on a corpus spanning 1800–1899 sourced from Project Gutenberg, the Internet Archive, the British National Library, and other archives.
This project began as an attempt to build a local LLM without relying on copyrighted training sources. The author also values local models that can run on a user’s machine without sending data to the cloud.
## Demo Resources
- HF Space: Transformers.js Demo
- CloudFlare Mirror: Transformers.js Demo
- Both are intended to use WebGPU and run locally on your system; no data is sent to the cloud.
## Related repos

- Zakarth/violet-1b4 (base/completion)
- Zakarth/violet-1b4-chat-onnx (WebGPU INT8)
## Prompt Format

This is the completion model: give it the opening of a passage as plain text and it continues from there. No chat template or special tokens are required. (The chat variant is trained to generate a mood line, assistant tag, and response after <|violet_mood|>; see Violet 1b4 Chat for that format.)

Example prompt:

`The morning fog had scarcely lifted when`

The model will then generate a continuation of the passage.
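For the chat variant, the prompt could be assembled from the special tokens documented below. The helper name and the exact ordering of tokens here are my own assumptions for illustration, not a verified template from the repo:

```javascript
// Sketch only: the exact chat template is an assumption based on the
// special tokens documented on this card, not a verified spec.
const SPECIAL = {
  system: "<|system|>",
  user: "<|user|>",
  assistant: "<|assistant|>",
  mood: "<|violet_mood|>",
};

// Hypothetical helper: assembles a chat-variant prompt ending at the
// mood token, so the model generates the mood line + assistant response.
function buildChatPrompt(systemText, userText) {
  return (
    `${SPECIAL.system}${systemText}` +
    `${SPECIAL.user}${userText}` +
    `${SPECIAL.mood}`
  );
}

console.log(buildChatPrompt("You are a Victorian correspondent.", "Describe the fog."));
```

The completion model needs none of this; plain prose is enough.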
## Tokenization and Special Tokens

Violet 1b4 was trained with a custom tokenizer built specifically for Victorian text.
Recommended IDs for generation:
- eos_token_id: 0
- pad_token_id: 1
Special tokens used during training (typical IDs from training config):
- <|system|>: 24000
- <|user|>: 24001
- <|assistant|>: 24002
- <|violet_mood|>: 24005
**Do not** mix tokenizers from other Violet variants (e.g. the 160M model) with this model.
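Keeping the IDs above in one place makes it easy to sanity-check them against the vocab size before wiring them into generation. A minimal sketch (constant names are mine, not from the repo):

```javascript
// Special-token IDs as listed on this card (typical values from the
// training config) plus the recommended generation IDs.
const SPECIAL_TOKEN_IDS = {
  "<|system|>": 24000,
  "<|user|>": 24001,
  "<|assistant|>": 24002,
  "<|violet_mood|>": 24005,
};
const EOS_TOKEN_ID = 0;
const PAD_TOKEN_ID = 1;

// The card reports a vocab size of 24014, so every special token
// must fall inside [0, 24014).
const VOCAB_SIZE = 24014;

for (const [tok, id] of Object.entries(SPECIAL_TOKEN_IDS)) {
  if (id < 0 || id >= VOCAB_SIZE) {
    throw new Error(`${tok} id ${id} is outside the vocab (size ${VOCAB_SIZE})`);
  }
}
console.log("all special-token ids are in range");
```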
## How to use (Transformers.js)

```js
import {
  AutoTokenizer,
  AutoModelForCausalLM,
} from "@huggingface/transformers";

const repo = "Zakarth/violet-1b4-onnx";

// Load tokenizer
const tokenizer = await AutoTokenizer.from_pretrained(repo, {
  use_fast: true,
});

// Load model (WebGPU if available, otherwise WASM)
const model = await AutoModelForCausalLM.from_pretrained(repo, {
  device: "webgpu", // or "wasm" fallback
});

const prompt = `The morning fog had scarcely lifted when`;

// Tokenize (exact equivalent of add_special_tokens=False)
const inputs = await tokenizer(prompt, {
  add_special_tokens: false,
  return_attention_mask: true,
});

// Generate
const output = await model.generate({
  input_ids: inputs.input_ids,
  attention_mask: inputs.attention_mask,
  max_new_tokens: 180,
  do_sample: true,
  temperature: 0.8,
  top_p: 0.9,
  top_k: 40,
  repetition_penalty: 1.15,
  eos_token_id: 0,
  pad_token_id: 1,
});

// Slice off the prompt tokens (same as PyTorch)
const promptLength = inputs.input_ids[0].length;
const generatedIds = output[0].slice(promptLength);

// Decode with special tokens preserved
const text = tokenizer.decode(generatedIds, {
  skip_special_tokens: false,
});

console.log(text);
```
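The example above hard-codes `device: "webgpu"`. A tiny helper can choose the WASM fallback when WebGPU is unavailable; the function name and detection logic are my own sketch, not part of the repo:

```javascript
// Hypothetical helper: prefer WebGPU when the environment exposes it
// (navigator.gpu), otherwise fall back to the WASM backend.
function pickDevice(env = globalThis) {
  const hasWebGPU = !!(env.navigator && "gpu" in env.navigator);
  return hasWebGPU ? "webgpu" : "wasm";
}

// Usage: AutoModelForCausalLM.from_pretrained(repo, { device: pickDevice() })
console.log(pickDevice({})); // no navigator -> "wasm"
```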
## Sample Outputs

Prompt: "The pirate's eyes gleamed with joy"
as he took his seat at the table. He had put away nothing but a small piece of paper and a little silver cup. " That is a fortunate circumstance," said Mr. Dinsmore, looking out from his easy-chair." It will be quite sufficient for us; I am sure it would be better than any one else could have done." The news spread rapidly through the city, and all eyes turned toward the vessel, which lay at anchor under the lee of the shore. A few minutes later she was close to the shore, her sails flying in the wind
## License
Model weights and code in this repository are released under CC0 1.0 (public domain dedication).
## Artwork
violet.png is © @rose.grtqndl (Instagram). Used and redistributed with permission; copyright remains with the artist.
## Contact

You may contact me on X, or anywhere else, by searching for my handle.
## Citation

```bibtex
@misc{violet2026,
  author = {Zakarth},
  title = {Violet: Victorian Language Models},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Zakarth/violet-1b4-chat}
}
```