---
license: llama2
library_name: transformers
pipeline_tag: text-generation
base_model:
- unsloth/llama-2-13b
- layoric/llama-2-13b-code-alpaca
- vanillaOVO/WizardMath-13B-V1.0
tags:
- merge
---

# AIM Paper Checkpoints Uploaded For Replication
This repository contains one of the checkpoints used in the paper "Activation-Informed Merging of Large Language Models". The specifics of this checkpoint are as follows:

- **Merging Method:** dare_ties
- **Models Used in Merging:**
    - ***Base Model:*** unsloth/llama-2-13b
    - ***Code:*** layoric/llama-2-13b-code-alpaca
    - ***Math:*** vanillaOVO/WizardMath-13B-V1.0
- **AIM:** True

Benchmark results and paper details can be found at the official [GitHub](https://github.com/ahnobari/ActivationInformedMerging.git).
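For reference, a `dare_ties` merge of the three models listed above can be expressed as a mergekit-style configuration. This is an illustrative sketch only, not the exact recipe from the paper: the `density` and `weight` values are placeholders, and the AIM step is applied separately using the code in the GitHub repository linked above.

```yaml
# Illustrative mergekit config for a dare_ties merge of the listed models.
# density/weight values are placeholders, not the paper's settings.
models:
  - model: layoric/llama-2-13b-code-alpaca   # code expert
    parameters:
      density: 0.5
      weight: 0.5
  - model: vanillaOVO/WizardMath-13B-V1.0    # math expert
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: unsloth/llama-2-13b
dtype: float16
```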