---
base_model: yamatazen/EsotericKnowledge-24B
base_model_relation: quantized
library_name: transformers
tags:
- mergekit
- merge
- chatml
---

This is my first 24B merge.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) as the base model.
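
At a high level, TIES computes a task vector (fine-tuned weights minus base weights) for each donor model, trims each vector to its highest-magnitude entries, elects a per-parameter sign by weighted majority, and averages only the values that agree with that sign. The sketch below illustrates this per tensor; it follows the paper rather than mergekit's internals, and the function name and exact normalization details are illustrative.

```python
import torch

def ties_merge(base, deltas, weights, densities):
    """Illustrative per-tensor TIES merge.

    base:      one weight tensor from the base model
    deltas:    list of task vectors (donor tensor - base tensor)
    weights:   per-model merge weights
    densities: per-model fraction of entries kept when trimming
    """
    trimmed = []
    for d, density in zip(deltas, densities):
        # 1) Trim: zero everything except the top-`density` fraction by magnitude.
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

    # 2) Elect signs: per-parameter sign of the weighted sum of trimmed deltas.
    elected = torch.sign(sum(w * t for w, t in zip(weights, trimmed)))

    # 3) Disjoint merge: weighted-average only entries agreeing with the elected sign.
    masks = [torch.sign(t) == elected for t in trimmed]
    numer = sum(w * t * m for w, t, m in zip(weights, trimmed, masks))
    denom = sum(w * m.float() for w, m in zip(weights, masks)).clamp(min=1e-8)
    return base + numer / denom
```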
### Models Merged

The following models were included in the merge:

* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0)
* [lars1234/Mistral-Small-24B-Instruct-2501-writer](https://huggingface.co/lars1234/Mistral-Small-24B-Instruct-2501-writer)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: arcee-ai/Arcee-Blitz
models:
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0
    parameters:
      density: 0.75
      weight: 0.8
  - model: lars1234/Mistral-Small-24B-Instruct-2501-writer
    parameters:
      density: 0.6
      weight: 0.5
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
dtype: bfloat16
parameters:
  normalize: true
tokenizer:
  source: union
```
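
To reproduce the merge, save the configuration above as `config.yaml` and run `mergekit-yaml config.yaml ./merged`. Below is a minimal sketch of loading the result with transformers; it assumes the `chatml` tag means the tokenizer ships a ChatML chat template, and the repository ID and prompt are only examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from this card's metadata; substitute your local path if needed.
model_id = "yamatazen/EsotericKnowledge-24B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# If the tokenizer's chat template is ChatML, this renders
# <|im_start|>/<|im_end|> turns around each message.
messages = [{"role": "user", "content": "Write a short scene set in a library."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```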