---
license: apache-2.0
tags:
- merge
- mergekit
- vortexmergekit
- amazingvince/Not-WizardLM-2-7B
- mlabonne/NeuralBeagle14-7B
---
# Wiz2Beagle-7b-v1
Hey there! Welcome to Wiz2Beagle-7b-v1! This model is a merge of multiple models created with the [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl).
Here's what went into this merge:
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
  - model: amazingvince/Not-WizardLM-2-7B
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
merge_method: ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
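## 💻 Usage

Below is a minimal sketch of how the merged model can be loaded and prompted with 🤗 Transformers. The repository id `your-username/Wiz2Beagle-7b-v1` is a placeholder (not part of the original card); substitute the repo this card is actually hosted under. The `float16` dtype matches the `dtype` used for the merge above.

```python
# Minimal usage sketch (assumes a placeholder repo id; replace with the real one).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Wiz2Beagle-7b-v1"  # placeholder repo id

# Load tokenizer and model in half precision, letting accelerate pick devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge config's dtype
    device_map="auto",
)

# Simple generation example.
prompt = "Explain the TIES merge method in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you want to reproduce the merge itself rather than use the published weights, the YAML above is a standard mergekit config and can typically be run with mergekit's `mergekit-yaml` command-line tool.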