---
base_model:
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
- watt-ai/watt-tool-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
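To run a merge like this one yourself, mergekit can be driven either from its CLI (`mergekit-yaml config.yaml ./output`) or from Python. Below is a minimal sketch of the Python route, assuming `pip install mergekit` and the YAML from the Configuration section below saved as `config.yaml`; the API follows mergekit's README at the time of writing, and the options shown are illustrative:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown under "Configuration" below.
with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./MERGE2",  # output directory for the merged checkpoint
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,  # the config takes the tokenizer from the base model
    ),
)
```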
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as the base.
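Model Stock observes that fine-tuned weights tend to lie on a thin shell around the base model, so it averages the fine-tuned checkpoints and then interpolates that average back toward the base, with the ratio set by the angle between the fine-tuned deltas. A toy per-tensor sketch of that geometry (illustrative only, not mergekit's actual implementation):

```python
import torch

def model_stock_tensor(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy per-tensor Model Stock step; mergekit's real implementation
    differs in details and operates over full checkpoints."""
    assert len(tuned) >= 2, "needs at least two fine-tuned models"
    deltas = [t - base for t in tuned]
    # Mean pairwise cosine similarity between fine-tuned deltas stands in
    # for cos(theta) in the paper.
    cos_vals = [
        torch.nn.functional.cosine_similarity(
            deltas[i].flatten(), deltas[j].flatten(), dim=0
        )
        for i in range(len(deltas))
        for j in range(i + 1, len(deltas))
    ]
    cos_theta = torch.stack(cos_vals).mean()
    n = len(tuned)
    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    return t * torch.stack(tuned).mean(dim=0) + (1 - t) * base
```

Note that all four fine-tuned checkpoints contribute equally: Model Stock takes no per-model weights, which is why the `models:` list in the configuration below carries no parameters.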
### Models Merged
The following models were included in the merge:
* [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B)
* [watt-ai/watt-tool-70B](https://huggingface.co/watt-ai/watt-tool-70B)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SicariusSicariiStuff/Negative_LLAMA_70B
- model: watt-ai/watt-tool-70B
- model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
pad_to_multiple_of: 8
```
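With the tensors written in `bfloat16` (per `out_dtype`, after merging in `float32`) and the tokenizer and `llama3` chat template taken from the base model, the output directory loads like any Llama-3-family checkpoint. A minimal loading sketch, assuming the merged weights are published under a repo id like `Tarek07/MERGE2` (hypothetical; substitute the actual path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarek07/MERGE2"  # hypothetical repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a one-line summary of model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```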