SatorTenet committed

Commit 29238fa · verified · 1 parent: 0b5d488

Add model card

Files changed (1): README.md (+76 lines)
---
base_model: Qwen/Qwen3.5-2B
library_name: transformers
pipeline_tag: text-generation
tags:
- lora
- sft
- interactive-fiction
- storytelling
- qwen
license: apache-2.0
language:
- en
---

# StoryEngine-2B

**StoryEngine-2B** is a fine-tuned version of [Qwen/Qwen3.5-2B](https://huggingface.co/Qwen/Qwen3.5-2B) for interactive fiction and guided story experiences.

The model guides users through immersive narrative scenes, presenting vivid descriptions and meaningful choices at each step.

## Model Details

- **Base model**: Qwen/Qwen3.5-2B
- **Fine-tuning method**: QLoRA (r=16, alpha=32)
- **Training data**: 3,140 interactive fiction examples across multiple genres
- **Training hardware**: NVIDIA GeForce GTX 1060 (6 GB)
- **Training time**: ~9.5 hours
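With LoRA, only low-rank adapter matrices are trained; the effective weight update is scaled by `alpha / r`, so r=16 and alpha=32 give a scaling factor of 2. A minimal pure-Python sketch of the update rule (illustrative only; the real adapters are trained tensors inside the transformer layers):

```python
# LoRA update: W' = W + (alpha / r) * (B @ A)
# A has shape (r, in_features), B has shape (out_features, r); tiny toy sizes here.

def matmul(X, Y):
    """Naive matrix multiply for small nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_delta(A, B, r, alpha):
    """Scaled low-rank weight update (alpha / r) * (B @ A)."""
    scale = alpha / r
    return [[scale * v for v in row] for row in matmul(B, A)]

# Toy r=2 example with the same alpha/r ratio (2.0) as StoryEngine-2B's r=16, alpha=32
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]  # 2 x 3
B = [[1.0, 2.0],
     [0.0, 1.0]]       # 2 x 2
print(lora_delta(A, B, r=2, alpha=4))  # each entry of B @ A doubled
```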
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "SatorTenet/StoryEngine-2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype=torch.float16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": (
            "You are StoryEngine — an interactive fiction model.\n"
            "Genre: Dark Fantasy | Tone: tense, mysterious\n"
            "Scene: 1/5\nVitality: 100 | Saga: 0"
        ),
    },
    {"role": "user", "content": "Start a new story."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.8, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

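The system prompt encodes the story state (genre, tone, scene counter, stats), so for multi-turn play you can rebuild it each turn as the state changes. A small helper following the format shown above (the field names and layout are an assumption based on this one example, not a documented schema):

```python
def build_system_prompt(genre, tone, scene, total_scenes, vitality, saga):
    """Assemble a StoryEngine system prompt in the format used above."""
    return (
        "You are StoryEngine — an interactive fiction model.\n"
        f"Genre: {genre} | Tone: {tone}\n"
        f"Scene: {scene}/{total_scenes}\n"
        f"Vitality: {vitality} | Saga: {saga}"
    )

# Advance to scene 2 after the player takes damage and gains saga points
print(build_system_prompt("Dark Fantasy", "tense, mysterious", 2, 5, 85, 10))
```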
## Ollama

```bash
ollama run storyengine:2b
```
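The `storyengine:2b` tag is assumed to exist in your local Ollama registry. If it does not, one way to create it is from a GGUF export of the weights with a Modelfile (sketch; the GGUF filename and sampling parameters are assumptions):

```
FROM ./storyengine-2b.gguf
PARAMETER temperature 0.8
PARAMETER top_p 0.9
SYSTEM "You are StoryEngine — an interactive fiction model."
```

Then register it with `ollama create storyengine:2b -f Modelfile`.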

## Genres

The model was trained on stories spanning multiple genres, including:
- Dark Fantasy
- Mythic Norse
- Sci-Fi
- Horror
- and more

## License

Apache 2.0 — same as the base model.