---
base_model:
- Entropicengine/Pinecone-Rune-12b
- EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math
- mistralai/Mistral-Nemo-Instruct-2407
- ReadyArt/Forgotten-Safeword-12B-v4.0
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# Opulus-12B-merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) as the base.

### Models Merged

The following models were included in the merge:

* [Entropicengine/Pinecone-Rune-12b](https://huggingface.co/Entropicengine/Pinecone-Rune-12b)
* [EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math](https://huggingface.co/EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math)
* [ReadyArt/Forgotten-Safeword-12B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-v4.0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.40
  - model: Entropicengine/Pinecone-Rune-12b
    parameters:
      weight: 0.25
  - model: EpistemeAI/Mistral-Nemo-Instruct-12B-Philosophy-Math
    parameters:
      weight: 0.20
  - model: ReadyArt/Forgotten-Safeword-12B-v4.0
    parameters:
      weight: 0.15
merge_method: ties
base_model: mistralai/Mistral-Nemo-Instruct-2407
dtype: float16
```

An initial oopsie: the weights above didn't come out as intended, so this merge leans more toward RP than toward philosophy and maths. But hey, if that's your preference, have a go. I haven't used it myself, so I don't know what it's like.
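## Usage

To reproduce the merge, save the YAML above to a file (e.g. `config.yaml`) and run mergekit's `mergekit-yaml` entry point on it, pointing at an output directory. The result can then be loaded with the standard `transformers` API. Below is a minimal, untested sketch; the `model_id` is a placeholder, so substitute the actual Hub repo id or your local merge output directory:

```python
# Minimal usage sketch (untested). "Opulus-12B-merged" is a placeholder:
# replace it with the actual repo id or the local path produced by mergekit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Opulus-12B-merged"  # placeholder repo id / local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",
)

# Mistral-Nemo-Instruct ships a chat template, which the merge should inherit.
messages = [{"role": "user", "content": "Write a short scene set in a rainy port town."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```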