---
base_model:
- shisa-ai/shisa-v2-mistral-nemo-12b
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- natong19/Mistral-Nemo-Instruct-2407-abliterated
- Elizezen/Himeyuri-v0.1-12B
library_name: transformers
tags:
- mergekit
- merge
- linear
language:
- en
- ja
---
![image/png](https://huggingface.co/yamatazen/LinearWriter-12B/resolve/main/LinearWriter-12B.png?download=true)
# LinearWriter-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b)
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated)
* [Elizezen/Himeyuri-v0.1-12B](https://huggingface.co/Elizezen/Himeyuri-v0.1-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: linear
dtype: bfloat16
out_dtype: bfloat16
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated # Uncensor
    parameters:
      weight: 1.0
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4 # Writing
    parameters:
      weight: [0.25, 0.3, 0.5, 0.6, 0.75]
  - model: Elizezen/Himeyuri-v0.1-12B # Japanese
    parameters:
      weight: [0.25, 0.3, 0.6, 0.3, 0.25]
  - model: shisa-ai/shisa-v2-mistral-nemo-12b # Japanese
    parameters:
      weight: [0.25, 0.3, 0.5, 0.3, 0.25]
```
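A note on the bracketed weight lists: in mergekit's gradient syntax, a list of values is (to the best of my understanding) interpolated across the model's transformer layers rather than applied uniformly. Read that way, the Gutenberg writing model's influence ramps up from 0.25 in the earliest layers to 0.75 in the final layers, the two Japanese models peak around the middle of the stack, and the abliterated instruct model anchors the merge with a constant weight of 1.0 throughout.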
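For readers unfamiliar with the method, the sketch below illustrates what a linear merge computes, following the paper linked above: each output tensor is a weighted average of the corresponding tensors from the source models. This is a minimal sketch, not mergekit's implementation; `linear_merge` is a hypothetical helper, it assumes constant per-model weights (the layer-wise interpolation from the configuration above is omitted for brevity), and it normalizes weights to sum to 1, which is mergekit's default behavior.

```python
# Minimal sketch of a linear merge (hypothetical helper, not mergekit's API):
# every merged tensor is the normalized weighted average of the corresponding
# tensors from the source models, which must share one architecture.
from typing import Dict, List

import torch


def linear_merge(state_dicts: List[Dict[str, torch.Tensor]],
                 weights: List[float]) -> Dict[str, torch.Tensor]:
    total = sum(weights)  # normalize so the effective weights sum to 1
    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for numerical stability, then cast to
        # bfloat16 to match the out_dtype in the configuration above.
        merged[name] = sum(
            (w / total) * sd[name].to(torch.float32)
            for w, sd in zip(weights, state_dicts)
        ).to(torch.bfloat16)
    return merged
```

With the layer-wise lists from the configuration, mergekit would instead apply a different blend of these weights to each transformer block, but the per-tensor averaging step is the same.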