give me a ❤️, if you like ;)<br>
Working well; everything else is up to you! (jina- and qwen-based models are not yet supported)
<br>
<br>
...
# Short usage hints (example for a large context with many expected hits):
Set your main model's context length (Max Tokens) to 16000 tokens, your embedder model's Max Embedding Chunk Length to 1024 tokens, and Max Context Snippets to 14,
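A quick sanity check of why these example numbers fit together (a sketch with hypothetical variable names; the setting names come from the text above):

```python
# Rough token-budget check for the example settings:
# 14 snippets of up to 1024 tokens each must fit into the 16000-token
# context alongside the system prompt, the question, and the answer.
max_context = 16000   # main model context length (Max Tokens)
chunk_len = 1024      # Max Embedding Chunk Length
max_snippets = 14     # Max Context Snippets

snippet_budget = chunk_len * max_snippets  # worst-case snippet tokens
headroom = max_context - snippet_budget    # left for prompt + answer

print(snippet_budget, headroom)  # 14336 1664
```

So in the worst case the snippets consume 14336 tokens, leaving roughly 1664 tokens of headroom; with a smaller context you would have to lower the snippet count or the chunk length.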
This text snippet is then used for your answer. <br>
This is especially true of context length, and I don't mean just the theoretical number you can set.
Some models can handle 128k or even 1M tokens, yet even with 16k of input their response to the same snippets is worse than that of other well-developed models.<br>
<br>
<br>
...
# Important -> The system prompt (an example):
You are a helpful assistant who provides an overview of ... under the aspects of ... .
You use attached excerpts from the collection to generate your answers!
Weight each individual excerpt in order, with the most important excerpts at the top and the less important ones further down.
The context of the entire article should not be given too much weight.
Answer the user's question!
After your answer, briefly explain why you included excerpts (1 to X) in your response, and note if you considered some of them unimportant!<br>
<i>(Adapt it to your needs; this example works well when I consult a book about a person and a term related to them. The explanation part was just a test for myself.)</i><br>
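The instruction to weight excerpts "in order" works because retrievers normally emit snippets sorted by relevance score, so the order the model sees matches the ranking. A minimal sketch of that assembly step (hypothetical function and data; not this project's actual code):

```python
# Hypothetical sketch: number the retrieved snippets in ranked order,
# so "the most important excerpts at the top" in the system prompt
# matches what the model actually receives.
def build_context(snippets):
    # snippets: list of (score, text) pairs from the retriever
    ranked = sorted(snippets, key=lambda s: s[0], reverse=True)
    return "\n\n".join(
        f"Excerpt {i + 1}:\n{text}" for i, (_, text) in enumerate(ranked)
    )

demo = [(0.91, "First hit."), (0.52, "Weaker hit.")]
print(build_context(demo))
```

If your tool does not guarantee score-sorted snippets, the "weight in order" instruction loses its meaning, so it is worth verifying once with a test question.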

or:<br>

You are an imaginative storyteller who crafts compelling narratives with depth, creativity, and coherence.
Your goal is to develop rich, engaging stories that captivate readers, staying true to the themes, tone, and style appropriate for the given prompt.
You use attached excerpts from the collection to generate your answers!
When generating stories, ensure coherence in characters, setting, and plot progression. Be creative and introduce imaginative twists and unique perspectives.<br><br>
or:<br>

You are a warm and engaging companion who loves to talk about cooking, recipes, and the joy of food.
Your aim is to share delicious recipes, cooking tips, and the stories behind different cultures in a personal, welcoming, and knowledgeable way.<br>
<br>
<li> The system prompt is weighted with a certain amount of influence around your question. You can easily test this once without a system prompt, or with a nonsensical one.</li>
<br><br>
These models usually work well:<br>
llama3.1, llama3.2, qwen2.5, deepseek-r1-distill, SauerkrautLM-Nemo (German) ... <br>
(llama3 or phi3.5 do not work well) <br>