| --- |
| license: other |
| tags: |
| - llama |
| - llama-2 |
| --- |
| [Experimental model] |
| |
This model is an experiment using the frankenstein merge script from
https://huggingface.co/chargoddard/llama2-22b, run with `BLOCK_DIAGONAL = False`.
| |
Base model:
https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16

Donor model:
https://huggingface.co/upstage/llama-30b-instruct-2048
| |
Merging these models used about 160 GB of system RAM; with enough memory to avoid swapping, the merge completes quickly.
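As a rough illustration of what a weight-level merge does (this is a hedged toy sketch, not the actual frankenstein script linked above, which splices donor layers into the base model), the example below blends two state dicts by linear interpolation; the parameter names and `alpha` value are assumptions for illustration only:

```python
# Toy sketch of merging two model state dicts by linear interpolation.
# NOT the actual frankenstein script; parameter names and alpha are
# illustrative assumptions.
import torch


def merge_state_dicts(base, donor, alpha=0.5):
    """Blend parameters present in both dicts with matching shapes;
    keep base-only or shape-mismatched parameters unchanged."""
    merged = {}
    for name, tensor in base.items():
        if name in donor and donor[name].shape == tensor.shape:
            merged[name] = (1 - alpha) * tensor + alpha * donor[name]
        else:
            merged[name] = tensor.clone()
    return merged


base = {"w": torch.zeros(2, 2)}
donor = {"w": torch.ones(2, 2)}
merged = merge_state_dicts(base, donor, alpha=0.5)
# each element of merged["w"] is 0.5
```

The real script instead transplants whole transformer blocks from the donor into the base model to grow its depth, which is why the shapes of the two models need not match everywhere.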
| |
For the prompt template and model information, see [huginnV1](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16).