---
base_model: Qwen/Qwen3.5-2B
datasets:
  - SatorTenet/storyengine-dataset
library_name: transformers
pipeline_tag: text-generation
tags:
  - lora
  - sft
  - interactive-fiction
  - storytelling
  - qwen
license: apache-2.0
language:
  - en
---

# StoryEngine-2B

StoryEngine-2B is a fine-tuned version of Qwen/Qwen3.5-2B for interactive fiction and guided story experiences. It walks players through immersive narratives, presenting a vivid scene and a set of meaningful choices at each step.

## Model Details

- **Base model:** Qwen/Qwen3.5-2B
- **Fine-tuning method:** QLoRA (r=16, alpha=32)
- **Training data:** 3,140 interactive fiction examples across multiple genres
- **Training hardware:** NVIDIA GeForce GTX 1060 (6 GB)
- **Training time:** ~9.5 hours
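For reference, the LoRA hyperparameters above map onto a PEFT configuration roughly like the following sketch. The target-module list and dropout value are assumptions typical for Qwen-family models, not details stated in this card:

```python
from peft import LoraConfig

# Sketch of the adapter config implied by r=16, alpha=32.
# target_modules and lora_dropout are illustrative guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```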

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "SatorTenet/StoryEngine-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.float16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": (
            "You are StoryEngine — an interactive fiction model.\n"
            "Genre: Dark Fantasy | Tone: tense, mysterious\n"
            "Scene: 1/5\nVitality: 100 | Saga: 0"
        ),
    },
    {"role": "user", "content": "Start a new story."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.8, top_p=0.9, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
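The system prompt in the example doubles as session state (scene counter, Vitality, Saga). For multi-turn sessions, one way to keep it in sync is to regenerate the prompt each turn; this small helper is illustrative, not part of the model's API:

```python
def build_system_prompt(genre, tone, scene, total_scenes, vitality, saga):
    """Render the StoryEngine system prompt with the current session state."""
    return (
        "You are StoryEngine — an interactive fiction model.\n"
        f"Genre: {genre} | Tone: {tone}\n"
        f"Scene: {scene}/{total_scenes}\n"
        f"Vitality: {vitality} | Saga: {saga}"
    )

# Each turn, replace messages[0]["content"] with the freshly rendered prompt
# before re-applying the chat template.
```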

## Ollama

```shell
ollama run storyengine:2b
```
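The tag above assumes the model has already been created in your local Ollama instance; Ollama does not pull it from the Hub automatically. A minimal Modelfile might look like the following, where the GGUF path and sampling parameters are placeholders, not values published in this card:

```
FROM ./storyengine-2b.gguf
PARAMETER temperature 0.8
PARAMETER top_p 0.9
```

Then create and run it with `ollama create storyengine:2b -f Modelfile`.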

## Genres

The model was trained on stories spanning multiple genres, including:

- Dark Fantasy
- Mythic Norse
- Sci-Fi
- Horror
- and more

## License

Apache 2.0 — same as the base model.