aashish1904 committed · verified · commit e0d9d2a · 1 parent: 16e7b35

Upload README.md with huggingface_hub

Files changed (1): README.md (+82 −0)
---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Base-2407
tags:
- text adventure
- roleplay
library_name: transformers
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Muse-12B-GGUF

This is a quantized version of [LatitudeGames/Muse-12B](https://huggingface.co/LatitudeGames/Muse-12B), created using llama.cpp.

# Original Model Card

![image/jpeg](muse.jpg)

# Muse-12B

Muse brings an extra dimension to any tale—whether you're exploring a fantastical realm, court intrigue, or slice-of-life scenarios where a conversation can be as meaningful as a quest. While it handles adventure capably, Muse truly shines when character relationships and emotions are at the forefront, delivering impressive narrative coherence over long contexts.

If you want to easily try this model for free, you can do so at [https://aidungeon.com](https://aidungeon.com/).

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below, we share more details on how Muse was created.

[Quantized GGUF weights can be downloaded here.](https://huggingface.co/LatitudeGames/Muse-12B-GGUF)

## Model details

Muse 12B was trained using Mistral Nemo 12B as its foundation, with training occurring in three stages: SFT (supervised fine-tuning), followed by two distinct DPO (direct preference optimization) phases.

**SFT** - Various multi-turn datasets from a multitude of sources, combining text adventures of the kind used to finetune [our Wayfarer 12B model](https://huggingface.co/LatitudeGames/Wayfarer-12B), long emotional narratives, and general roleplay, each carefully balanced and rewritten to be free of common AI clichés. A small single-turn instruct dataset was included to send a stronger signal during finetuning.

**DPO 1** - Gutenberg DPO, [credit to Jon Durbin](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) - This stage introduces human writing techniques, significantly enhancing the model's potential outputs, albeit trading some intelligence for the stylistic benefits of human-created text.

**DPO 2** - Reward Model User Preference Data, [detailed in our blog](https://blog.latitude.io/all-posts/synthetic-data-preference-optimization-and-reward-models) - This stage reins in the Gutenberg stage's "wildness," restoring intelligence while maintaining the enhanced writing quality and providing a final layer of polish from the reward model samples.
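
Both DPO phases optimize the standard direct preference optimization objective. As a rough illustration only (this is not Latitude's training code), the per-pair loss can be sketched from summed sequence log-probabilities under the policy and a frozen reference model:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Toy per-pair DPO loss from scalar sequence log-probabilities."""
    # How much more (log-)likely each response became under the policy,
    # relative to the frozen reference model.
    chosen_margin = policy_chosen - ref_chosen
    rejected_margin = policy_rejected - ref_rejected
    # Standard DPO objective: -log sigmoid(beta * margin difference).
    logit = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logit)))
```

Lowering this loss pushes the policy to raise the preferred response's probability relative to the rejected one, with `beta` controlling how far the policy may drift from the reference.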
46
+
47
+ The result is a model that writes like no other: versatile across genres, natural in expression, and suited to emotional depth.
48
+
49
+ ## Inference
50
+
51
+ The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.
52
+
53
+ ```
54
+ "temperature": 0.8,
55
+ "repetition_penalty": 1.05,
56
+ "min_p": 0.025
57
+ ```
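
To illustrate what the `min_p` setting does, here is a toy sketch of min-p filtering over a four-token vocabulary (not any particular inference library's implementation):

```python
import math

def min_p_filter(logits, temperature=0.8, min_p=0.025):
    """Toy min-p filtering over a list of raw logits."""
    # Temperature-scale the logits, then softmax into probabilities.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # min-p keeps only tokens whose probability is at least
    # min_p times that of the single most likely token.
    threshold = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= threshold}
    # Renormalize the survivors so they form a distribution again.
    norm = sum(kept.values())
    return {i: p / norm for i, p in kept.items()}

# Four-token toy vocabulary: one dominant token, one plausible
# alternative, and two long-tail tokens that min-p should prune.
dist = min_p_filter([5.0, 3.0, 0.0, -4.0])
```

With `min_p: 0.025`, any token whose probability falls below 2.5% of the top token's probability is discarded before sampling, pruning the long tail that temperature alone would leave in play.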

## Limitations

Muse was trained exclusively on second-person present-tense data (using “you”) in a narrative style. Other styles will still work, but may produce suboptimal results.

Average response lengths tend toward verbosity (1,000+ tokens) due to the Gutenberg DPO influence, though this can be controlled through explicit instructions in the system prompt.

## Prompt Format

ChatML was used during all training stages.

```
<|im_start|>system
You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|im_end|>
<|im_start|>user
> You peer into the darkness.<|im_end|>
<|im_start|>assistant
You have been eaten by a grue.

GAME OVER
```
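
If you assemble prompts by hand rather than through a tokenizer's chat template, the format above is simple to build. A minimal sketch (the helper name `to_chatml` is ours, not part of any library):

```python
def to_chatml(turns):
    """Render (role, content) pairs as a ChatML prompt string."""
    # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in turns]
    # Leave an open assistant header so generation continues from there.
    return "\n".join(parts) + "\n<|im_start|>assistant\n"

prompt = to_chatml([
    ("system", "You're a masterful storyteller and gamemaster."),
    ("user", "> You peer into the darkness."),
])
```

With `transformers`, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should produce an equivalent string, since ChatML was used for all training stages.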

## Credits

Thanks to [Gryphe Padar](https://huggingface.co/Gryphe) for collaborating on this finetune with us!