---
base_model:
- NurtureAI/Meta-Llama-3-8B-Instruct-32k
- Nitral-AI/Hathor-L3-8B-v.02
- grimjim/Llama-3-Luminurse-v0.2-OAS-8B
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
---

![1718719796324.png](https://cdn-uploads.huggingface.co/production/uploads/65f158693196560d34495d54/6SuBtoTMP5Svgl0fmT9vt.png)

***

### L3-Inca-8B-v0.8

[L3-Inca-8B-v0.8](https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.8) is a merge of the following models:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)

using [NurtureAI/Meta-Llama-3-8B-Instruct-32k](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-32k) as the base.

> [!IMPORTANT]
> UPDATE:
> Changed the merge method from **model_stock** to **ties** and gave Stheno the highest weight and density.

***

### Quantized Models by [mradermacher](https://huggingface.co/mradermacher)

• Static [L3-Inca-8B-v0.8-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF)

• Imatrix [L3-Inca-8B-v0.8-i1-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-i1-GGUF)

***

### Configuration

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.85
      weight: 0.5
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.75
      weight: 0.3
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.75
      weight: 0.2
merge_method: ties
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-32k
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
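
***

### Usage

The configuration above can be reproduced by passing it to mergekit's `mergekit-yaml` entry point. Below is a minimal sketch of loading the merged model with 🤗 Transformers: the model ID comes from this card, while the dtype, device placement, and sampling settings are illustrative assumptions rather than the author's recommendations.

```python
# Minimal sketch: load L3-Inca-8B-v0.8 with 🤗 Transformers.
# bfloat16 matches the merge's dtype; device_map and sampling values are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ppoyaa/L3-Inca-8B-v0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the prompt with the Llama-3-Instruct chat template bundled with the tokenizer.
messages = [{"role": "user", "content": "Write a haiku about llamas."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```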