---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
- anirvankrishna/model_harmful_lora_fused
library_name: transformers
tags:
- mergekit
- merge
---
|
# model_sft_resta

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
|
|
|
## Merge Details

### Merge Method

This model was merged with the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) as the base.
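Conceptually, task arithmetic forms the merged weights as the base weights plus a weighted sum of task vectors (each fine-tuned checkpoint minus the base). In this card the single task vector carries weight `-1.0`, so the "harmful" direction is subtracted from the base model rather than added. A minimal sketch with toy tensors (not mergekit's actual implementation; function and variable names here are illustrative only):

```python
import torch

def task_arithmetic(base, tuned_models, weights):
    """Merge per parameter tensor: base + sum_i w_i * (tuned_i - base)."""
    merged = {}
    for name, base_param in base.items():
        delta = torch.zeros_like(base_param)
        for tuned, w in zip(tuned_models, weights):
            delta += w * (tuned[name] - base_param)
        merged[name] = base_param + delta
    return merged

# Toy example: one parameter tensor; weight -1.0 subtracts the task vector.
base = {"w": torch.tensor([1.0, 2.0])}
tuned = {"w": torch.tensor([1.5, 2.5])}  # base + task vector [0.5, 0.5]
merged = task_arithmetic(base, [tuned], weights=[-1.0])
print(merged["w"])  # tensor([0.5000, 1.5000])
```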
|
|
|
### Models Merged

The following models were included in the merge:

* [anirvankrishna/model_harmful_lora_fused](https://huggingface.co/anirvankrishna/model_harmful_lora_fused)
|
|
|
### Configuration

The following YAML configuration was used to produce this model:
|
|
|
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
chat_template: chatml
dtype: bfloat16
merge_method: task_arithmetic
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 28]
        model: Qwen/Qwen2.5-1.5B-Instruct
      - layer_range: [0, 28]
        model: anirvankrishna/model_harmful_lora_fused
        parameters:
          weight: -1.0
```
|
|
|