TinyTim v2 1B IT

A reasoning model fine-tuned to produce reasoning traces constrained to the vocabulary of James Joyce's Finnegans Wake, while keeping its final answers factually correct on standard benchmarks.

What it does

TinyTim takes questions and produces reasoning traces whose thinking process is expressed in vocabulary drawn from James Joyce's Finnegans Wake, while the final answers remain factually correct. It demonstrates that linguistic style and factual reasoning are separable: a model can reason in any register.

Training

  • Base model: google/gemma-3-1b-it
  • Method: SFT using npcpy's run_sft with LoRA (r=128, alpha=256)
  • Data generation: Reasoning traces from multiple models (GPT-OSS 20B, Qwen3 4B Thinking, DeepSeek-R1 32B) were "Joyceanized": rewritten by a converter model under the Wake-vocabulary constraint
  • Training data: TruthfulQA questions with Wake-styled reasoning traces
  • Evaluation: AI2-ARC, with LLM-judge correctness scoring
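The hyperparameters above can be restated as a small config sketch. The key names here are illustrative assumptions for readability, not the actual keyword arguments of npcpy's run_sft:

```python
# Illustrative training configuration assembled from the card.
# Key names are assumptions, not npcpy's actual run_sft kwargs.
sft_config = {
    "base_model": "google/gemma-3-1b-it",
    "method": "lora",
    "lora_r": 128,
    "lora_alpha": 256,  # alpha / r = 2, a common LoRA scaling choice
}
print(sft_config["lora_alpha"] // sft_config["lora_r"])  # -> 2
```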

Architecture

Follows the tinytim-r1 pipeline:

  1. Generate native reasoning trace from a question
  2. Rewrite the trace constrained to Finnegans Wake vocabulary
  3. SFT on (question, wake-trace) pairs
  4. Evaluate factual correctness despite stylistic constraint

Usage

from npcpy.ft.sft import load_sft_model, predict_sft

# Load the fine-tuned checkpoint and generate a response
model, tokenizer = load_sft_model("npc-worldwide/tinytim-v2-1b-it")
response = predict_sft(model, tokenizer, "What is the capital of France?", max_new_tokens=300)
print(response)

Or via Ollama:

ollama run hf.co/npc-worldwide/tinytim-v2-1b-it

Part of the NPC Worldwide ecosystem

  • npcpy: the framework used for training and inference
  • npcsh: the shell for interacting with NPC agents