---
base_model: Qwen/Qwen3.5-2B
datasets:
- SatorTenet/storyengine-dataset
library_name: transformers
pipeline_tag: text-generation
tags:
- lora
- sft
- interactive-fiction
- storytelling
- qwen
license: apache-2.0
language:
- en
---

# StoryEngine-2B

**StoryEngine-2B** is a fine-tuned version of [Qwen/Qwen3.5-2B](https://huggingface.co/Qwen/Qwen3.5-2B) for interactive fiction and guided story experiences.

The model guides users through immersive narrative experiences, presenting vivid scenes and meaningful choices at each step.

## Model Details

- **Base model**: Qwen/Qwen3.5-2B
- **Fine-tuning method**: QLoRA (r=16, alpha=32)
- **Training data**: 3,140 interactive fiction examples across multiple genres
- **Training hardware**: NVIDIA GeForce GTX 1060 6GB
- **Training time**: ~9.5 hours
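
The adapter settings can be expressed with PEFT's `LoraConfig`; only `r=16` and `lora_alpha=32` come from this card, while the dropout and target modules below are plausible assumptions for Qwen-family models, not the exact training recipe:

```python
from peft import LoraConfig

# r and lora_alpha match the card; the remaining values are
# illustrative defaults, not the verified training configuration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,  # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```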

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "SatorTenet/StoryEngine-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.float16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": (
            "You are StoryEngine — an interactive fiction model.\n"
            "Genre: Dark Fantasy | Tone: tense, mysterious\n"
            "Scene: 1/5\nVitality: 100 | Saga: 0"
        ),
    },
    {"role": "user", "content": "Start a new story."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.8, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
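
For a multi-turn session, one approach (a sketch, not an official API of this model) is to append each assistant reply and the player's choice back onto the message list, bumping the scene counter in the system prompt before the next generation:

```python
def advance(messages, assistant_reply, player_choice, scene, total=5):
    """Record the last exchange and advance the scene counter.

    `scene` is the scene that was just completed; the system prompt's
    "Scene: N/total" marker is rewritten to the next scene.
    """
    messages = messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": player_choice},
    ]
    # Copy the system message before mutating it.
    messages[0] = dict(messages[0])
    messages[0]["content"] = messages[0]["content"].replace(
        f"Scene: {scene}/{total}", f"Scene: {scene + 1}/{total}"
    )
    return messages

# Hypothetical example turn:
history = [
    {"role": "system", "content": "Scene: 1/5\nVitality: 100 | Saga: 0"},
    {"role": "user", "content": "Start a new story."},
]
history = advance(history, "The gate creaks open...", "Enter the keep.", scene=1)
```

Each updated `history` can then be passed through `apply_chat_template` exactly as in the snippet above.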

## Ollama

```bash
ollama run storyengine:2b
```
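
The tag above assumes the model has already been imported into Ollama locally. A minimal way to do that (the GGUF filename and sampling parameters below are illustrative, not a published artifact) is:

```bash
# Create a local Ollama model from a GGUF export of the weights.
# The filename is a placeholder for wherever your converted file lives.
cat > Modelfile <<'EOF'
FROM ./storyengine-2b-q4_k_m.gguf
PARAMETER temperature 0.8
PARAMETER top_p 0.9
EOF
ollama create storyengine:2b -f Modelfile
```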

## Genres

The model was trained on stories spanning multiple genres, including:
- Dark Fantasy
- Mythic Norse
- Sci-Fi
- Horror
- and others

## License

Apache 2.0 — same as the base model.