---
base_model:
- SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- DreadPoor/Aspire-8B-model_stock
- DreadPoor/Rusted_Gold-8B-LINEAR
- kromcomp/L3-Umbral-Mind-r128-LoRA
- Yuma42/Llama3.1-IgneousIguana-8B
- DreadPoor/ichor_1.1-8B-Model_Stock
library_name: transformers
tags:
- mergekit
- merge
- autoquant
- gguf
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B](https://huggingface.co/SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B) as the base.

### Models Merged

The following models were included in the merge:

* [DreadPoor/Aspire-8B-model_stock](https://huggingface.co/DreadPoor/Aspire-8B-model_stock)
* [DreadPoor/Rusted_Gold-8B-LINEAR](https://huggingface.co/DreadPoor/Rusted_Gold-8B-LINEAR) + [kromcomp/L3-Umbral-Mind-r128-LoRA](https://huggingface.co/kromcomp/L3-Umbral-Mind-r128-LoRA)
* [Yuma42/Llama3.1-IgneousIguana-8B](https://huggingface.co/Yuma42/Llama3.1-IgneousIguana-8B)
* [DreadPoor/ichor_1.1-8B-Model_Stock](https://huggingface.co/DreadPoor/ichor_1.1-8B-Model_Stock)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DreadPoor/Rusted_Gold-8B-LINEAR+kromcomp/L3-Umbral-Mind-r128-LoRA
  - model: DreadPoor/ichor_1.1-8B-Model_Stock
  - model: DreadPoor/Aspire-8B-model_stock
  - model: Yuma42/Llama3.1-IgneousIguana-8B
merge_method: model_stock
base_model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
normalize: false
int8_mask: true
dtype: bfloat16
```
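For readers curious how Model Stock arrives at its merged weights, the paper's core idea is to interpolate between the average of the fine-tuned models and the base model, with an interpolation ratio derived from the angle between the fine-tuned "task vectors" (weight deltas from the base). Below is a minimal NumPy sketch of that interpolation on flat toy parameter vectors; the function name and the flattening are illustrative assumptions, not mergekit's actual implementation, which operates per weight tensor:

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Toy sketch of the Model Stock interpolation (arXiv:2403.19522).

    base      -- flat parameter vector of the base model
    finetuned -- list of N flat parameter vectors of fine-tuned models

    Computes t = N*cos(theta) / (1 + (N-1)*cos(theta)), where cos(theta)
    is the mean pairwise cosine similarity between the task vectors
    (finetuned minus base), then returns t * average + (1 - t) * base.
    """
    n = len(finetuned)
    deltas = [w - base for w in finetuned]

    # Mean pairwise cosine similarity between task vectors.
    sims = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i], deltas[j]
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(sims))

    # Interpolation ratio from the paper's closed form.
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

Intuitively, nearly parallel task vectors (cosine near 1) push t toward 1, so the merge stays close to the plain average; nearly orthogonal ones (cosine near 0) pull the result back toward the base model.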