Update README.md
Sections that are the token-count equivalent of a few paragraphs in length tend to work best.

When in doubt, make your prompt more verbose and detailed. Talosian can assist with writing its own prompts if you write a brief starter sentence and let it extrapolate accordingly. Be sure to include a sentence that briefly explains the section header tags in your prompt, as in the above example.
Talosian is _not_ a user <-> assistant chat-formatted model. All prompting should be done as completions (for example, in text-generation-webui's Notebook mode).
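As a rough sketch, a completion-style prompt is just one continuous string that ends exactly where the model should pick up writing, with no chat roles anywhere. The helper below illustrates this; the `build_prompt` function and the bracketed section-tag format in the example are hypothetical placeholders, not Talosian's actual tags (use the tag format from the example above):

```python
# Minimal sketch of assembling a raw completion prompt (no chat formatting).
# The "[ ... ]" section-tag style shown in the usage example is a hypothetical
# placeholder -- substitute whatever header-tag format your prompt uses.

def build_prompt(tag_explainer: str,
                 sections: list[tuple[str, str]],
                 starter: str) -> str:
    """Concatenate an explainer sentence, tagged sections, and a brief
    starter sentence into one plain-text prompt. The model is expected to
    continue generating from the end of the returned string."""
    parts = [tag_explainer]
    for tag, body in sections:
        parts.append(f"{tag}\n{body}")
    parts.append(starter)  # generation continues from here
    return "\n\n".join(parts)
```

The key point is that the prompt ends mid-document: there is no final "assistant:" turn, so the model simply extrapolates the text, which is what Notebook-mode prompting does.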
## Generation Parameters