# TinyTim v2 1B IT
A reasoning model fine-tuned to produce Joycean-styled reasoning traces constrained to Finnegans Wake vocabulary, while maintaining factual correctness on standard benchmarks.
## What it does
TinyTim takes questions and produces reasoning traces in which the thinking process is expressed using vocabulary drawn from James Joyce's Finnegans Wake, while the final answers remain factually correct. It demonstrates that linguistic style and factual reasoning are separable: a model can reason in any register.
## Training
- Base model: google/gemma-3-1b-it
- Method: SFT using npcpy's `run_sft` with LoRA (r=128, alpha=256)
- Data generation: Reasoning traces from multiple models (GPT-OSS 20B, Qwen3 4B Thinking, DeepSeek-R1 32B) were "Joyceanized": rewritten under the Wake vocabulary constraint using a converter model
- Training data: TruthfulQA questions with Wake-styled reasoning traces
- Evaluation: AI2-ARC, with LLM-judge correctness scoring
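As a rough illustration of the hyperparameters listed above, the LoRA setup might be configured along these lines. This is a hedged sketch only: the actual `run_sft` signature is not shown here, and the target modules, dataset filename, and argument names are assumptions, not the npcpy API.

```python
# Illustrative LoRA/SFT configuration mirroring the card's stated
# hyperparameters (r=128, alpha=256). Names are assumptions; the real
# npcpy run_sft arguments may differ.
lora_config = {
    "r": 128,             # LoRA rank, as stated above
    "lora_alpha": 256,    # scaling factor, here 2 * r
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
}

sft_args = {
    "base_model": "google/gemma-3-1b-it",
    "dataset": "truthfulqa_wake_traces.jsonl",  # hypothetical filename
    **lora_config,
}
print(sft_args["lora_alpha"] / sft_args["r"])  # alpha/r scaling ratio
```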
## Architecture
Follows the tinytim-r1 pipeline:
- Generate native reasoning trace from a question
- Rewrite the trace constrained to Finnegans Wake vocabulary
- SFT on (question, wake-trace) pairs
- Evaluate factual correctness despite stylistic constraint
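The first three steps above can be sketched as plain functions. The teacher and converter model calls are stubbed out, and none of these function names belong to npcpy; the vocabulary filter simply illustrates what "constrained to Wake vocabulary" means.

```python
def generate_trace(question: str) -> str:
    """Stub for step 1: a teacher model (e.g. GPT-OSS 20B) would produce this."""
    return f"First, consider what is asked: {question}"

def joyceanize(trace: str, wake_vocab: set) -> str:
    """Stub for step 2: a converter model rewrites the trace; here we just
    drop words outside the allowed vocabulary to illustrate the constraint."""
    return " ".join(w for w in trace.split() if w.lower().strip(".,:?") in wake_vocab)

def build_sft_pair(question: str, wake_trace: str, answer: str) -> dict:
    """Step 3: a (question, wake-trace) training pair; the final answer stays
    outside the styled thinking span so correctness can be judged (step 4)."""
    return {"prompt": question, "completion": f"<think>{wake_trace}</think>\n{answer}"}

wake_vocab = {"riverrun", "past", "eve", "and", "adam", "of", "shore"}
trace = joyceanize("riverrun past Eve and Adam of the shore", wake_vocab)
pair = build_sft_pair("What is 2+2?", trace, "4")
print(pair["completion"])
```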
## Usage
```python
from npcpy.ft.sft import load_sft_model, predict_sft

model, tokenizer = load_sft_model("npc-worldwide/tinytim-v2-1b-it")
response = predict_sft(model, tokenizer, "What is the capital of France?", max_new_tokens=300)
print(response)
```
Or via Ollama:
```shell
ollama run hf.co/npc-worldwide/tinytim-v2-1b-it
```
Part of the NPC Worldwide ecosystem