---
base_model:
- v000000/NM-12B-Lyris-dev-3
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
library_name: transformers
tags:
- mergekit
- merge
---
[QuantFactory](https://hf.co/QuantFactory)
# QuantFactory/Insanity-GGUF
This is a quantized version of [Nohobby/Insanity](https://huggingface.co/Nohobby/Insanity), created using llama.cpp.
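As a quick sanity check, a GGUF quant from this repo can be loaded with llama-cpp-python. This is a minimal sketch; the filename, context size, and prompt below are placeholders, not names taken from this repo:

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python.
# The model_path is a placeholder; substitute whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Insanity.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,                           # Mistral-Nemo supports long contexts
    n_gpu_layers=-1,                      # offload all layers if a GPU is available
)

out = llm("[INST] Introduce yourself in one sentence. [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```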
# Original Model Card
# insanity
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
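For reference, a merge defined this way can be reproduced from its YAML with mergekit's Python entry points (following the upstream mergekit README; the file paths below are placeholders). Note that the configuration further down is multi-document YAML, so the named intermediate stages (`uncen`, `conv`, `chatml`) would each be merged on their own before feeding the final recipe:

```python
# Sketch of running one merge stage from a saved YAML config with mergekit.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", "r", encoding="utf-8") as f:  # hypothetical path
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",            # output directory (hypothetical)
    options=MergeOptions(
        cuda=False,              # set True to merge on GPU
        copy_tokenizer=True,     # copy the base tokenizer into the output
        lazy_unpickle=True,      # lower peak memory while loading shards
    ),
)
```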
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) as the base.
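Roughly, della_linear prunes each model's delta from the base with magnitude-aware sampling, rescales the surviving entries, and combines the results linearly: `epsilon` sets the spread of keep probabilities around the target density, and `lambda` scales the merged delta. The following is a conceptual numpy sketch of that idea, under illustrative assumptions (the rank-to-probability mapping here is a simplification, not mergekit's actual code):

```python
# Conceptual sketch of della_linear on a single tensor (not mergekit's implementation).
import numpy as np

rng = np.random.default_rng(0)

def della_prune(delta, density=0.5, epsilon=0.04):
    """Magnitude-aware stochastic pruning of one task vector (delta from base)."""
    flat = delta.ravel()
    ranks = np.argsort(np.argsort(np.abs(flat)))  # 0 = smallest magnitude
    # Keep probability grows with magnitude rank, spanning
    # density - epsilon .. density + epsilon around the target density.
    p_keep = density - epsilon + 2 * epsilon * ranks / max(len(flat) - 1, 1)
    mask = rng.random(len(flat)) < p_keep
    pruned = np.where(mask, flat / p_keep, 0.0)   # rescale survivors
    return pruned.reshape(delta.shape)

def della_linear(base, tuned_models, weights, density=0.5, epsilon=0.04, lam=1.05):
    """Linearly combine pruned task vectors and add them back to the base."""
    merged_delta = np.zeros_like(base)
    for tuned, w in zip(tuned_models, weights):
        merged_delta += w * della_prune(tuned - base, density, epsilon)
    return base + lam * merged_delta

# Toy usage on random tensors:
base = rng.normal(size=(4, 4))
tuned = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(2)]
print(della_linear(base, tuned, weights=[0.6, 0.4]))
```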
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
  - model: Fizzarolli/MN-12b-Sunrose
    parameters:
      density: 0.5
      weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
merge_method: dare_ties
base_model: natong19/Mistral-Nemo-Instruct-2407-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
name: uncen
---
models:
  - model: unsloth/Mistral-Nemo-Instruct-2407
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      density: 0.5
      weight: [0.139, 0.208, 0.139, 0.208, 0.139]
  - model: nbeerbower/mistral-nemo-cc-12B
    parameters:
      density: [0.65, 0.35, 0.5, 0.35, 0.65]
      weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      density: [0.35, 0.65, 0.5, 0.65, 0.35]
      weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
merge_method: della
base_model: unsloth/Mistral-Nemo-Instruct-2407
parameters:
  epsilon: 0.04
  lambda: 1.05
  normalize: false
  int8_mask: true
dtype: bfloat16
name: conv
---
models:
  - model: unsloth/Mistral-Nemo-Base-2407
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.208, 0.208, 0.139, 0.139]
  - model: shuttleai/shuttle-2.5-mini
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      density: 0.9
      gamma: 0.01
      weight: [0.139, 0.139, 0.208, 0.208, 0.139]
merge_method: breadcrumbs_ties
base_model: unsloth/Mistral-Nemo-Base-2407
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
name: chatml
---
models:
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: chatml
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: uncen
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
  - model: conv
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: v000000/NM-12B-Lyris-dev-3
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
merge_method: della_linear
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
```
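Most of the list-valued `weight` and `density` entries above are layer gradients: mergekit interpolates the anchor values across the model's transformer blocks, so a 5-element list yields a smoothly varying per-layer value. A hedged sketch of that expansion, assuming simple linear interpolation over Mistral-Nemo's 40 layers (the exact scheme is defined by mergekit, not by this snippet):

```python
# Sketch: expanding a gradient parameter list to one value per layer.
import numpy as np

def expand_gradient(anchors, num_layers):
    xs = np.linspace(0, 1, num=len(anchors))       # anchor positions along the stack
    layer_pos = np.linspace(0, 1, num=num_layers)  # one sample point per layer
    return np.interp(layer_pos, xs, anchors)

per_layer_density = expand_gradient([0.35, 0.65, 0.5, 0.65, 0.35], 40)
print(per_layer_density.round(3))
```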