---
license: apache-2.0
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- merge
- mergekit
- lazymergekit
- mistral
- hermes
- dpo
---

# SUONG-2
SUONG-2 is a SLERP merge of two leading Hermes models, created with LazyMergekit. It combines OpenHermes-2.5's strong general capabilities with Nous-Hermes-2-DPO's refined, DPO-tuned instruction following.
## About Me
I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specializing in generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capability.
## Merge Details

### Merge Method
This model uses SLERP (Spherical Linear Interpolation), which blends the two parents along the arc between their weight tensors rather than averaging them linearly (a minimal sketch follows this list). The interpolation parameter `t` (0 = OpenHermes, 1 = Nous-Hermes-DPO) is tuned per layer type:
- Self-attention layers follow a graded curve across depth (`[0, 0.5, 0.3, 0.7, 1]`), leaning on OpenHermes in early layers and Nous-Hermes-DPO in later ones
- MLP layers use the complementary curve (`[1, 0.5, 0.7, 0.3, 0]`) for balanced transitions
- All remaining tensors use an even `t = 0.5`
- All 32 layers of both models are merged (`layer_range: [0, 32]`) for maximum capability retention
- Weights are stored in bfloat16 for efficient memory usage
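
For intuition, here is a minimal sketch of SLERP applied to a pair of weight tensors. This is illustrative only — mergekit's own implementation handles additional edge cases — and the function name and fallback threshold are my choices, not part of the merge config:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    # Measure the angle between the tensors, treated as flat unit vectors.
    u0 = v0.flatten() / (v0.norm() + eps)
    u1 = v1.flatten() / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(u0, u1), -1.0, 1.0))

    # Nearly colinear tensors: fall back to plain linear interpolation,
    # since dividing by sin(omega) would be numerically unstable.
    if omega.abs() < eps:
        return (1.0 - t) * v0 + t * v1

    # Interpolate along the arc connecting the two tensors; at t=0 this
    # returns v0 (the base model), at t=1 it returns v1.
    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * v0 \
         + (torch.sin(t * omega) / sin_omega) * v1
```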
### Models Merged

The following models were included in the merge:
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
- [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
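
Saving this file as `config.yaml` and running `mergekit-yaml config.yaml ./SUONG-2` reproduces the merge (the output path is illustrative).

## Usage

A minimal inference sketch with Hugging Face Transformers. The repo id below is a placeholder until the model is published, and the ChatML prompt format is an assumption based on both parent models being trained with it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual Hub repo id for SUONG-2.
model_id = "your-username/SUONG-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # loads the bfloat16 weights as stored
    device_map="auto",
)

# Both parents use ChatML, so the tokenizer's chat template should apply.
messages = [{"role": "user", "content": "Explain SLERP merging in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```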