ThalisAI committed on
Commit
481e42a
·
verified ·
1 Parent(s): e5fd6f7

Add Ollama Modelfile with Llama 3.1 chat template fix

Files changed (1):
  1. Modelfile +39 -0
Modelfile ADDED
@@ -0,0 +1,39 @@
+ # Modelfile for ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic
+ #
+ # This model is based on the Llama 3.1 architecture and requires the Llama 3.1
+ # chat template instead of DeepSeek's native template. The default template
+ # from the GGUF metadata uses DeepSeek's fullwidth Unicode special tokens,
+ # which are not handled correctly by the Llama BPE tokenizer in some backends.
+ #
+ # Usage:
+ #   ollama create deepseek-r1-70b-heretic -f DeepSeek-R1-Distill-Llama-70B-heretic.Modelfile
+ #   ollama run deepseek-r1-70b-heretic "Hello!"
+ #
+ # To use a different quantization, change the FROM line:
+ #   FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
+ #   FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q3_K_M
+
+ FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q6_K
+
+ TEMPLATE """{{- if .System }}<|start_header_id|>system<|end_header_id|>
+
+ {{ .System }}<|eot_id|>{{ end }}
+ {{- range $i, $_ := .Messages }}
+ {{- $last := eq (len (slice $.Messages $i)) 1 }}
+ {{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
+
+ {{ .Content }}<|eot_id|>
+ {{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
+
+ {{- if .Content }}{{ .Content }}
+ {{- end }}{{- if not $last }}<|eot_id|>{{- end }}
+ {{- end }}
+ {{- if and $last (ne .Role "assistant") }}<|start_header_id|>assistant<|end_header_id|>
+
+ {{ end }}
+ {{- end }}"""
+
+ PARAMETER stop <|eot_id|>
+ PARAMETER stop <|end_of_text|>
+ PARAMETER temperature 0.6
+ PARAMETER top_p 0.95
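
The TEMPLATE above is a Go text/template that Ollama evaluates over the conversation history. As a rough illustration of the prompt it produces, here is a minimal plain-Python sketch of the same logic (the `render` function is purely illustrative, not part of Ollama's API):

```python
def render(messages, system=None):
    """Illustrative re-implementation of the Llama 3.1 TEMPLATE above:
    wraps each turn in <|start_header_id|>...<|end_header_id|> headers,
    terminates completed turns with <|eot_id|>, and opens an assistant
    header at the end so the model continues from there."""
    out = ""
    if system:
        out += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    for i, msg in enumerate(messages):
        last = i == len(messages) - 1
        if msg["role"] == "user":
            out += f"<|start_header_id|>user<|end_header_id|>\n\n{msg['content']}<|eot_id|>"
        elif msg["role"] == "assistant":
            out += f"<|start_header_id|>assistant<|end_header_id|>\n\n{msg['content']}"
            if not last:
                out += "<|eot_id|>"  # the final assistant turn is left open
        if last and msg["role"] != "assistant":
            # Prompt the model to respond as the assistant.
            out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = render([{"role": "user", "content": "Hello!"}])
```

For the single user message above, the rendered prompt ends with an open assistant header, which is why the `<|eot_id|>` stop parameter is needed to end generation.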