---
base_model:
- LatitudeGames/Wayfarer-12B
- yamatazen/EtherealAurora-12B-v2
- TheDrummer/Rocinante-12B-v1.1
- nbeerbower/Lyra4-Gutenberg-12B
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
- nothingiisreal/MN-12B-Celeste-V1.9
- anthracite-org/magnum-v2-12b
library_name: transformers
tags:
- mergekit
- merge
---
# merge

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/632149f88c0da827c72dccde/onx2Y1k8HPSoVKt26xV28.gif)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [yamatazen/EtherealAurora-12B-v2](https://huggingface.co/yamatazen/EtherealAurora-12B-v2) as the base model.

### Models Merged

The following models were included in the merge:
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
* [nbeerbower/Lyra4-Gutenberg-12B](https://huggingface.co/nbeerbower/Lyra4-Gutenberg-12B)
* [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)
* [cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b)
* [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9)
* [anthracite-org/magnum-v2-12b](https://huggingface.co/anthracite-org/magnum-v2-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: LatitudeGames/Wayfarer-12B
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
  - model: nothingiisreal/MN-12B-Celeste-V1.9
  - model: TheDrummer/Rocinante-12B-v1.1
  - model: anthracite-org/magnum-v2-12b
  - model: nbeerbower/Lyra4-Gutenberg-12B
  - model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
merge_method: model_stock
base_model: yamatazen/EtherealAurora-12B-v2
normalize: false
int8_mask: true
dtype: bfloat16
```
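For intuition, the Model Stock method referenced above interpolates between the average of the fine-tuned models and the base model, with an interpolation ratio derived from the angle between each model's weight delta relative to the base (see the linked paper). Below is a minimal per-tensor sketch, illustrative only and not mergekit's actual implementation; the `model_stock` function name and flattened numpy tensors are assumptions made for the example:

```python
import numpy as np

def model_stock(base, models):
    """Merge flattened weight tensors via the Model Stock interpolation rule."""
    # Deltas of each fine-tuned model from the shared base weights.
    deltas = [m - base for m in models]
    # Average pairwise cosine similarity between the deltas (cos theta).
    cos = []
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            a, b = deltas[i], deltas[j]
            cos.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cos))
    # Interpolation ratio t = k*cos / (1 + (k-1)*cos) for k models.
    k = len(models)
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Blend the model average back toward the base by (1 - t).
    w_avg = np.mean(models, axis=0)
    return t * w_avg + (1 - t) * base
```

When the fine-tuned deltas all point the same way (cos theta near 1), t approaches 1 and the result is close to a plain average; when they are nearly orthogonal, t shrinks and the merge stays close to the base model.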