---
base_model:
- Qwen/Qwen2.5-32B-Instruct
- zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
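SLERP (spherical linear interpolation) blends the two checkpoints' weights along an arc on a hypersphere rather than along a straight line, which preserves the magnitude of the weight vectors better than plain linear averaging when the two models diverge. Below is a minimal, illustrative sketch of the idea in PyTorch; it is not mergekit's actual implementation, which handles normalization and edge cases more carefully:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as one flat vector and interpolates along the
    great-circle arc between them; falls back to linear interpolation
    when the vectors are nearly colinear.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(dot)          # angle between the two weight vectors
    if omega.abs() < eps:            # nearly parallel: slerp degenerates to lerp
        return ((1 - t) * a + t * b).to(a.dtype)
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)
```

mergekit applies an interpolation factor `t` of this kind tensor by tensor, following the schedule given in the configuration below.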

### Models Merged

The following models were included in the merge:

* [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
* [zetasepic/Qwen2.5-32B-Instruct-abliterated-v2](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
    layer_range:
    - 0
    - 32
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range:
    - 0
    - 32
merge_method: slerp
base_model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
parameters:
  t:
  - filter: self_attn
    value:
    - 0
    - 0.5
    - 0.3
    - 0.7
    - 1
  - filter: mlp
    value:
    - 1
    - 0.5
    - 0.7
    - 0.3
    - 0
  - value: 0.5
dtype: bfloat16
```
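
Here `t` is the SLERP interpolation factor: at `t = 0` a tensor comes entirely from the `base_model` (the abliterated checkpoint), and at `t = 1` entirely from the other model. Giving a list of values defines a gradient across layer depth, with separate schedules for the `self_attn` and `mlp` tensors and a flat `t = 0.5` for all remaining parameters. The merge can be reproduced by saving this configuration to a file and running mergekit's `mergekit-yaml` entry point on it.

The merged checkpoint loads like any other `transformers` causal LM. A minimal usage sketch, where `your-username/merged-model` is a hypothetical placeholder for the actual repo id or local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual repo or a local path to the merge.
model_id = "your-username/merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was saved in
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```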