---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x229
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc as the base model.
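For intuition, the core of Model Stock can be sketched per weight tensor: each fine-tuned model defines a task vector (its delta from the base), the average pairwise angle between those vectors determines an interpolation ratio, and the merged tensor interpolates between the base and the mean of the fine-tuned tensors. The numpy sketch below is an illustration of the formula in the paper, not the mergekit implementation; the function name `model_stock_merge` is made up for this example.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Illustrative Model Stock merge for a single weight tensor.

    base: the base model's tensor.
    finetuned: list of k fine-tuned tensors of the same shape.
    """
    k = len(finetuned)
    deltas = [ft - base for ft in finetuned]

    # Average pairwise cosine similarity between task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    # Interpolate between the base and the average of the fine-tuned weights.
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the task vectors all point the same way (cosine 1) the ratio is 1 and the merge is just their average; when they are mutually orthogonal the ratio shrinks toward 0 and the result stays close to the base, which is how Model Stock suppresses noise that the fine-tunes do not agree on.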
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
* /workspace/cache/models--marcelbinz--Llama-3.1-Centaur-70B/snapshots/7e42dbd2e967200cee4b3eafe523c154edc2e766
* /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
* /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930
* /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
- model: /workspace/cache/models--marcelbinz--Llama-3.1-Centaur-70B/snapshots/7e42dbd2e967200cee4b3eafe523c154edc2e766
- model: /workspace/cache/models--AstroMLab--AstroSage-70B/snapshots/86496984f418ef5a6825f2c16983595e7f7d5930
- model: /workspace/cache/models--watt-ai--watt-tool-70B/snapshots/dbe19344ec6ee4b9e1636e9e6ce24fc6a85a725e
base_model: /workspace/cache/models--Ppoyaa--MythoNemo-L3.1-70B-v1.0/snapshots/faaa4e992764eb4667b8f541dcf75ce8b7aaadcc
merge_method: model_stock
tokenizer:
source: base
chat_template: llama3
int8_mask: true
pad_to_multiple_of: 8
dtype: bfloat16
```