---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
datasets:
- appvoid/no-prompt-50k
---
# palmer
### a better base model
palmer is a series of ~1b-parameter language models fine-tuned to be used as base models instead of relying on custom prompts for tasks. This means it can be further fine-tuned on more data with custom prompts as usual, or used for downstream tasks like any other base model. The model gets the best of both worlds: some "bias" toward acting as an assistant, but also the ability to predict the next word from its internet knowledge base. It's a 1.1b llama 2 model, so you can use it with your favorite tools and frameworks.
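
As a quick check that it behaves like any other llama 2 checkpoint, here is a minimal sketch using the `transformers` pipeline API; the repository id `appvoid/palmer-002` is an assumption, so swap in whichever palmer checkpoint you want:

```python
from transformers import pipeline

# repo id assumed for illustration; any palmer checkpoint works the same way
generator = pipeline("text-generation", model="appvoid/palmer-002")

# base-model style usage: plain text in, continuation out
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```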
|
### evaluation

| Model | ARC_C | HellaSwag | PIQA | Winogrande |
|------|-----|-----------|------|-------------|
| tinyllama-2t | 0.2807 | 0.5463 | 0.7067 | 0.5683 |
| palmer-001 | 0.2807 | 0.5524 | 0.7106 | 0.5896 |
| tinyllama-2.5t | 0.3191 | 0.5896 | 0.7307 | 0.5872 |
| palmer-002 | 0.3242 | **0.5956** | **0.7345** | 0.5888 |
| palmer-002-ultra | **0.3319** | 0.5877 | 0.7252 | **0.6038** |

This is a continuation of `palmer-x-002`. As of now, it is the best overall model in the series.
### training
Training took ~7.5 P100 GPU hours. The model was fine-tuned on 50,000 shuffled GPT-4-generated samples, using lower learning rates to preserve as much of its general knowledge as possible.
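
To make "lower learning rates" concrete, here is an illustrative sketch of a `transformers` fine-tuning configuration; every value below is an assumption, since the exact hyperparameters are not published:

```python
from transformers import TrainingArguments

# all values are assumptions for illustration; the actual palmer
# hyperparameters are not published. the point is a learning rate
# well below typical fine-tuning defaults, to limit forgetting.
args = TrainingArguments(
    output_dir="palmer-finetune",
    learning_rate=2e-5,                # assumed: conservative LR
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
```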
### prompt
```
no prompt
```
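
Since there is no prompt format, you condition the model with plain text, exactly as you would any base model. A minimal sketch, with the repository id again assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# repo id assumed for illustration
name = "appvoid/palmer-002"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# no chat template and no system prompt: plain text goes straight in
inputs = tokenizer("The three most common uses of a base model are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```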
<a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>