palmer is a series of ~1B-parameter language models fine-tuned to be used as base models instead of relying on custom prompts for tasks. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model offers the best of both worlds: some "bias" toward acting as an assistant, but also the ability to predict the next word from its internet knowledge base.
Training took ~3.5 P100 GPU hours. The model was trained on 15,000 shuffled samples from the OpenHermes dataset. It is a LLaMA 2 model, so you can use it with your favorite tools and frameworks.
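Since it is a standard LLaMA 2 model, it loads with the usual `transformers` auto classes. A minimal sketch, assuming the weights are published on the Hugging Face Hub; the model id below is a hypothetical placeholder, substitute the actual repository name:

```python
# Minimal text-completion sketch with transformers.
# NOTE: "your-username/palmer" is a placeholder model id, not the real repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/palmer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base-model style usage: give it a partial sentence and let it continue.
prompt = "On this article, we are going to learn"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is tuned to behave like a base model, no special chat template is needed; plain next-word prediction works as shown.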

Note: still highly experimental! Your feedback will make it better.

### Prompt
```
On this article, we are going to learn
```