# AR-Chat-v1.0
AR-Chat-v1.0 is a merge of two Mistral-7B fine-tunes, built with mergekit using the SLERP (Spherical Linear Interpolation) merge method.
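SLERP interpolates each pair of matching weight tensors along the arc between them rather than along a straight line, which keeps the interpolated weights at a sensible norm even when the two source models point in different directions. The snippet below is a minimal NumPy sketch of the idea only; mergekit's real implementation differs in details such as tensor handling, dtypes, and per-filter `t` schedules.

```python
# Illustrative sketch of SLERP between two weight tensors (not mergekit's code).
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between tensors a and b at fraction t."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_dir = a_flat / (np.linalg.norm(a_flat) + eps)
    b_dir = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two weight directions
    if theta < eps:                   # nearly parallel: fall back to plain LERP
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    w_a = np.sin((1.0 - t) * theta) / sin_theta
    w_b = np.sin(t * theta) / sin_theta
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)

# t = 0.0 returns the first tensor, t = 1.0 the second, t = 0.5 a spherical midpoint.
```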
## Sources

This model merges the following two Mistral-7B checkpoints:

- [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
- [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## Merge Config
```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0.1, 0.3, 0.5, 0.7, 0.9]
    - filter: mlp
      value: [0.9, 0.7, 0.5, 0.3, 0.1]
    - value: 0.5
dtype: bfloat16
```
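Here `t` is the interpolation factor toward the second model (NeuralHermes-2.5); mergekit spreads the listed values across the 32-layer range. Self-attention weights therefore stay close to the base model in early layers (t = 0.1) and lean toward NeuralHermes in late layers (t = 0.9), MLP weights follow the opposite gradient, and all other tensors use a flat t = 0.5.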
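## Usage

A minimal loading sketch with Transformers, assuming the merged weights are published under `Gotab123/AR-Chat-v1.0` on the Hugging Face Hub (the repo name shown on this card). The merge itself does not fix a prompt format, so plain text completion is shown.

```python
# Minimal usage sketch; repo id "Gotab123/AR-Chat-v1.0" is taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gotab123/AR-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "What is a SLERP model merge?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```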