This model is a fine-tuned version of HuggingFaceTB/SmolLM2-135M-Instruct trained on the Fu01978/ao3_chat dataset.
It is designed to blend the instruction-following capabilities of SmolLM2 with the descriptive, narrative, and atmospheric prose styles commonly found in creative writing communities.
The model was fine-tuned for a short duration to "infuse" the base model with narrative flair without completely overwriting its general knowledge.
The training loss dropped rapidly over the first 20 steps before stabilizing around 2.5-2.6:
| Step | Training Loss |
|---|---|
| 5 | 3.572342 |
| 15 | 2.720610 |
| 30 | 2.497861 |
| 45 | 2.626326 |
| 60 | 2.708193 |
| 75 | 2.637797 |
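
The training recipe itself was not published. As a rough point of reference, a comparably short fine-tune could be run with TRL's `SFTTrainer`; the sketch below is an assumption-laden starting point, not the actual configuration. Only the dataset name, base model, and final step count come from this card; every hyperparameter is a guess.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical reproduction sketch -- the split name and all
# hyperparameters below are assumptions, not the published settings.
dataset = load_dataset("Fu01978/ao3_chat", split="train")

config = SFTConfig(
    output_dir="SmolLM2-135M-Instruct-AO3",
    max_steps=75,                   # the last step logged in the table above
    per_device_train_batch_size=4,  # assumption
    learning_rate=2e-5,             # assumption
    logging_steps=5,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # base model from the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```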
This model uses the ChatML template. For best results, format prompts with the tokenizer's `apply_chat_template` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Fu01978/SmolLM2-135M-Instruct-AO3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write a scene about a rainy library."},
]

# Build the ChatML prompt; return_dict=True returns input_ids and the
# attention mask so the batch can be unpacked directly into generate().
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
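
Sampling with `temperature=0.7` gives varied output across runs; for a model this small, lowering the temperature (or disabling sampling entirely) generally trades stylistic variety for coherence.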