---
base_model:
- unsloth/DeepSeek-R1-Distill-Qwen-7B
- Qwen/Qwen2.5-Math-7B-Instruct
- nvidia/AceMath-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merged-model-ACE

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
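
A minimal sketch of loading the merged checkpoint with transformers; the path `merged-model-ACE` is an assumption (the local merge output directory, or a Hub repo id if published):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "merged-model-ACE" is assumed to be the local merge output directory.
tokenizer = AutoTokenizer.from_pretrained("merged-model-ACE")
model = AutoModelForCausalLM.from_pretrained(
    "merged-model-ACE", torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What is 12 * 17? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```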

## Merge Details
### Merge Method

This model was merged with the [TIES](https://arxiv.org/abs/2306.01708) merge method, using [unsloth/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B) as the base.
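
TIES treats each fine-tune as a task vector over the base, trims each vector to its largest entries, elects a per-parameter sign by majority, and averages only the values that agree with that sign. A toy, single-tensor illustration in PyTorch (a sketch of the idea, not mergekit's implementation; `weights` and `density` mirror the config below):

```python
import torch

def ties_merge(base, finetuned, weights, density=0.7):
    """Toy per-tensor TIES merge; a sketch, not mergekit's implementation."""
    # 1. Task vectors: how each fine-tune moved away from the base,
    #    scaled by its merge weight up front for simplicity.
    deltas = [w * (ft - base) for w, ft in zip(weights, finetuned)]

    # 2. Trim: zero out all but the top-`density` fraction by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

    # 3. Elect a sign per parameter: the sign of the summed trimmed deltas.
    elected = torch.sign(torch.stack(trimmed).sum(dim=0))

    # 4. Disjoint merge: average only the entries that agree with the sign.
    agree = [torch.where(torch.sign(t) == elected, t, torch.zeros_like(t))
             for t in trimmed]
    count = torch.stack([(a != 0).to(a.dtype) for a in agree]).sum(dim=0).clamp(min=1)
    return base + torch.stack(agree).sum(dim=0) / count

# Example on random tensors standing in for one weight matrix:
base = torch.randn(4, 4)
ace, qwen = base + 0.1 * torch.randn(4, 4), base + 0.1 * torch.randn(4, 4)
merged = ties_merge(base, [ace, qwen], weights=[0.7, 0.3])
```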

### Models Merged

The following models were included in the merge:
* [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)
* [nvidia/AceMath-7B-Instruct](https://huggingface.co/nvidia/AceMath-7B-Instruct)
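
An elementwise merge is only well defined when every checkpoint has identical tensor shapes; all three repos here are Qwen2.5-7B-family models. A quick sanity check one might run before merging:

```python
from transformers import AutoConfig

repos = [
    "unsloth/DeepSeek-R1-Distill-Qwen-7B",
    "Qwen/Qwen2.5-Math-7B-Instruct",
    "nvidia/AceMath-7B-Instruct",
]

# Print the architecture-defining fields; they must match across all
# three checkpoints for an elementwise merge to be well defined.
for repo in repos:
    cfg = AutoConfig.from_pretrained(repo)
    print(repo, cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers)
```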

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# merge_ties.yml

# 1. Overall merge method: TIES (sign-elect sparse task arithmetic)
merge_method: ties

# 2. Base model (all task vectors are computed relative to this checkpoint)
base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B

# 3. Full models to merge (base first, then others)
models:
  - model: unsloth/DeepSeek-R1-Distill-Qwen-7B  # base has no extra params
  - model: nvidia/AceMath-7B-Instruct
    parameters:
      weight: 0.7
      density: 0.7
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      weight: 0.3
      density: 0.7

# 4. Global merge parameters
parameters:
  normalize: true   # normalize weights across models
  int8_mask: true   # mask small values when using int8 backing

# 5. Data type for merged tensors
dtype: bfloat16
```
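
With mergekit installed (`pip install mergekit`), this file can be fed straight to the `mergekit-yaml` CLI. A sketch via Python's subprocess; the output directory name is illustrative, not a path from the original card:

```python
import subprocess

# Run the merge described by merge_ties.yml; "./merged-model-ACE" is an
# illustrative output directory, not a path from the original card.
subprocess.run(
    ["mergekit-yaml", "merge_ties.yml", "./merged-model-ACE"],
    check=True,
)
```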