https://huggingface.co/Undi95/MistralThinker-GGUF/discussions/1

These repos are public because I hit the private storage limit, but feel free to try them.
This model uses the Mistral V7 prompt format.
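As a rough sketch, the Mistral V7 format wraps the system prompt and each user turn in control tokens; the token names below are taken from Mistral's V7 tokenizer, so double-check them against the `tokenizer_config.json` in the repo:

```python
# Minimal sketch of a Mistral V7-style prompt builder.
# Control tokens ([SYSTEM_PROMPT], [INST], etc.) are an assumption based on
# Mistral's V7 tokenizer; verify against the repo's chat template.
# Note: the BOS token <s> is usually prepended by the tokenizer itself.
def format_v7(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, assistant_reply) pairs;
    leave assistant_reply empty on the last pair to generate a new reply."""
    out = f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
    for user, assistant in turns:
        out += f"[INST]{user}[/INST]"
        if assistant:
            out += f"{assistant}</s>"
    return out
```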

It was trained on DeepSeek R1 RP logs and character cards, and some funny shit.

Default system prompt: "You are MistralThinker, a Large Language Model (LLM) created by Undi.\nYour knowledge base was last updated on 2023-10-01. Current date: {date}.\n\nWhen unsure, state you don't know."

I recommend putting information about the persona and yourself in the system prompt to let the magic happen.

Sadly, I have a problem with the prompt format in the tokenizer_config.json.

I tried to recreate what DeepSeek did with their distills: they add `<think>` at the beginning of each assistant reply and cut the thinking part out of the context.

I did the same, but on my side the first `<think>` doesn't appear when using "Chat completion".

Other than that, the model seems fully functional. Feel free to try it, but be sure to prefill `<think>` one way or another.
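A minimal sketch of both workarounds on the client side, assuming you build the prompt string yourself (text completion): strip old `<think>...</think>` blocks from the history, and append `<think>` after the final user turn so the model starts in thinking mode even though the template drops it:

```python
import re

# Matches a full thinking block plus any trailing whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(reply: str) -> str:
    """Remove the <think>...</think> block from a past assistant reply
    before it goes back into the context, mirroring DeepSeek's distills."""
    return THINK_RE.sub("", reply)

def prefill_think(prompt: str) -> str:
    """Append <think> after the final user turn so generation starts
    inside a thinking block (works around the missing first <think>)."""
    return prompt + "<think>"
```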

Here's an example where the character card contains `You're roleplaying as a hot 35 years old motherly MILF` and a custom system prompt.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/FSkA1aM1wgPNpy3J4NrmI.png)