---
base_model:
- TareksLab/Mithril-Prose-LLaMa-70B
- TareksLab/Mithril-Base-LLaMa-70B
- TareksLab/Mithril-ERP-LLaMa-70B
- TareksLab/Mithril-RP-LLaMa-70B
- TareksLab/Mithril-Thinker-Llama-70B
- TareksLab/Mithril-Creative-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---

# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method, with [TareksLab/Mithril-Base-LLaMa-70B](https://huggingface.co/TareksLab/Mithril-Base-LLaMa-70B) as the base model. A rough sketch of the idea behind Multi-SLERP appears after the configuration below.

### Models Merged

The following models were included in the merge:

* [TareksLab/Mithril-Prose-LLaMa-70B](https://huggingface.co/TareksLab/Mithril-Prose-LLaMa-70B)
* [TareksLab/Mithril-ERP-LLaMa-70B](https://huggingface.co/TareksLab/Mithril-ERP-LLaMa-70B)
* [TareksLab/Mithril-RP-LLaMa-70B](https://huggingface.co/TareksLab/Mithril-RP-LLaMa-70B)
* [TareksLab/Mithril-Thinker-Llama-70B](https://huggingface.co/TareksLab/Mithril-Thinker-Llama-70B)
* [TareksLab/Mithril-Creative-LLaMa-70B](https://huggingface.co/TareksLab/Mithril-Creative-LLaMa-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/Mithril-Prose-LLaMa-70B
    parameters:
      weight: 0.2
  - model: TareksLab/Mithril-ERP-LLaMa-70B
    parameters:
      weight: 0.2
  - model: TareksLab/Mithril-RP-LLaMa-70B
    parameters:
      weight: 0.2
  - model: TareksLab/Mithril-Creative-LLaMa-70B
    parameters:
      weight: 0.2
  - model: TareksLab/Mithril-Thinker-Llama-70B
    parameters:
      weight: 0.2
base_model: TareksLab/Mithril-Base-LLaMa-70B
merge_method: multislerp
parameters:
  normalize_weights: false
  eps: 1e-9
chat_template: llama3
dtype: bfloat16
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
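
### What Multi-SLERP Does (Sketch)

As a rough intuition only, Multi-SLERP generalizes two-model SLERP to several models: picture each tensor as a point on a hypersphere, average the contributing models in the tangent space at the base model's point, and map the result back onto the sphere. The sketch below is a hand-written illustration of that idea under those assumptions, not mergekit's actual implementation (see the linked blog post for the real algorithm); the function name `multislerp_sketch` is hypothetical.

```python
import numpy as np

def multislerp_sketch(base, models, weights, eps=1e-9):
    """Illustrative spherical average of `models` around `base` (1-D arrays)."""
    def unit(v):
        return v / max(np.linalg.norm(v), eps)

    b = unit(base)
    tangents = []
    for m in models:
        u = unit(m)
        cos = np.clip(np.dot(b, u), -1.0, 1.0)
        theta = np.arccos(cos)
        if theta < eps:
            # Model coincides with the base on the sphere: zero tangent.
            tangents.append(np.zeros_like(b))
            continue
        # Log map: angle-scaled direction from b toward u, tangent at b.
        tangents.append(theta * (u - cos * b) / np.sin(theta))

    # Weighted average in tangent space. Weights are used as given,
    # mirroring `normalize_weights: false` in the config above.
    avg = sum(w * t for w, t in zip(weights, tangents))

    # Exp map: back onto the unit sphere, then rescale to base's magnitude.
    angle = np.linalg.norm(avg)
    if angle < eps:
        out = b
    else:
        out = np.cos(angle) * b + np.sin(angle) * (avg / angle)
    return out * np.linalg.norm(base)

# Toy example: average two orthogonal directions around a base vector.
base = np.array([1.0, 0.0, 0.0])
models = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(multislerp_sketch(base, models, weights=[0.5, 0.5]))
```

With `normalize_weights: false`, as in the configuration above, the per-model weights are applied as written rather than being rescaled to sum to 1.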
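
## Usage

A minimal loading sketch with 🤗 Transformers, following the card's `library_name: transformers`, `dtype: bfloat16`, and `chat_template: llama3` settings. The repository id below is a placeholder; substitute this model's actual Hub id or local path. Note that a 70B model in bfloat16 needs roughly 140 GB of accelerator memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"  # placeholder: substitute this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",
)

# `chat_template: llama3` means the tokenizer carries a Llama 3 chat template.
messages = [{"role": "user", "content": "Write a short scene set in a misty harbor."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```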