---
base_model:
- princeton-nlp/gemma-2-9b-it-SimPO
- TheDrummer/Gemmasutra-9B-v1
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- sillytavern
- gemma2
language:
- en
---
# QuantFactory/Ellaria-9B-GGUF

This is a quantized version of [tannedbum/Ellaria-9B](https://huggingface.co/tannedbum/Ellaria-9B) created using llama.cpp.
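For quick local testing, a quant can be fetched and run with llama.cpp's CLI. This is a sketch: the exact GGUF filename below (`Ellaria-9B.Q4_K_M.gguf`) is an assumed example, so check the repository's file list for the quantization you actually want.

```shell
# Download one quant from this repo (filename is an assumed example;
# pick a real .gguf file from the repository's "Files" tab).
huggingface-cli download QuantFactory/Ellaria-9B-GGUF Ellaria-9B.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp
llama-cli -m Ellaria-9B.Q4_K_M.gguf -cnv
```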
# Original Model Card
Same reliable approach as before. A good RP model and a suitable dose of SimPO are a match made in heaven.
## SillyTavern
## Text Completion presets

```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
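Roughly equivalent settings can also be passed to llama.cpp's CLI directly, as sketched below. The flag names are llama.cpp's; `smooth_factor` and `smooth_curve` (smooth/quadratic sampling) have no direct mainline `llama-cli` flag at the time of writing and are omitted, and the model path is a placeholder.

```shell
# Placeholder model path; substitute the GGUF file you downloaded.
llama-cli -m Ellaria-9B.Q4_K_M.gguf \
  --temp 0.9 \
  --top-k 30 \
  --top-p 0.75 \
  --min-p 0.2 \
  --repeat-penalty 1.1
```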
## Advanced Formatting
**Important:** use the Context & Instruct presets for Gemma, available [here](https://huggingface.co/tannedbum/ST-Presets/tree/main).
Instruct Mode: Enabled
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
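As a rough illustration of what SLERP does to a pair of weight tensors (a simplified sketch, not mergekit's exact implementation): each tensor is treated as one flattened vector, and the merge walks along the arc between the two vectors rather than the straight line that plain averaging would take.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors,
    treating each tensor as a single flattened vector."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Unit vectors are used only to measure the angle between the tensors.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    w_a = np.sin((1.0 - t) * theta) / sin_theta
    w_b = np.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)
```

At `t = 0` the result is the first tensor, at `t = 1` the second; the per-layer `t` schedule in the config below blends attention and MLP weights with different ratios.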
### Models Merged

The following models were included in the merge:
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [TheDrummer/Gemmasutra-9B-v1](https://huggingface.co/TheDrummer/Gemmasutra-9B-v1)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: TheDrummer/Gemmasutra-9B-v1
    layer_range: [0, 42]
  - model: princeton-nlp/gemma-2-9b-it-SimPO
    layer_range: [0, 42]
merge_method: slerp
base_model: TheDrummer/Gemmasutra-9B-v1
parameters:
  t:
  - filter: self_attn
    value: [0.2, 0.4, 0.6, 0.2, 0.4]
  - filter: mlp
    value: [0.8, 0.6, 0.4, 0.8, 0.6]
  - value: 0.4
dtype: bfloat16
```
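A merge like this can be reproduced with mergekit's command-line entry point. A minimal sketch, assuming the YAML above is saved as `ellaria.yml` (the config filename and output directory are arbitrary choices):

```shell
pip install mergekit
# Save the YAML configuration above as ellaria.yml, then:
mergekit-yaml ellaria.yml ./Ellaria-9B --cuda
```

The `--cuda` flag is optional and only speeds up the merge when a GPU is available.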
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum