---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x218

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c as a base.

### Models Merged

The following models were included in the merge:
* /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
base_model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c
merge_method: model_stock
tokenizer:
  source: base
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
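For intuition about what the `model_stock` merge method does, here is a minimal numpy sketch of the per-tensor interpolation described in the Model Stock paper: the fine-tuned models' weight deltas from the base are averaged, and the angle between those deltas sets how far the average is pulled back toward the base. This is an illustrative simplification (real mergekit operates tensor-by-tensor over full checkpoints, with details such as `int8_mask` and dtype handling from the config above); the function name and the toy vectors are assumptions for the example.

```python
import numpy as np

def model_stock_merge(w_base, w_models):
    """Illustrative Model Stock merge for a single weight tensor.

    w_base   -- the base model's tensor
    w_models -- list of the fine-tuned models' tensors
    """
    deltas = [w - w_base for w in w_models]
    n = len(deltas)
    # Mean pairwise cosine similarity between the fine-tuned deltas.
    cos_vals = []
    for i in range(n):
        for j in range(i + 1, n):
            cos_vals.append(
                float(np.dot(deltas[i].ravel(), deltas[j].ravel())
                      / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j])))
            )
    cos_theta = float(np.mean(cos_vals))
    # Interpolation ratio from the paper: t = N*cos(theta) / ((N-1)*cos(theta) + 1).
    # Aligned deltas (cos -> 1) give t -> 1 (trust the average);
    # orthogonal deltas (cos -> 0) give t -> 0 (fall back to the base).
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    w_avg = np.mean(w_models, axis=0)
    return t * w_avg + (1 - t) * w_base
```

With two identical fine-tunes the result is their shared weights (`t = 1`); with orthogonal deltas the merge collapses to the base (`t = 0`), which is the paper's geometric argument for why averaging noisy fine-tunes should be anchored to the pre-trained model.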