---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x198

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using /workspace/prototype-0.4x196 as the base.

### Models Merged

The following models were included in the merge:

* /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
* /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v1.4/snapshots/4288519ba279872651e29e430a85c728277cb71b
* /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
  - model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
  - model: /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v1.4/snapshots/4288519ba279872651e29e430a85c728277cb71b
base_model: /workspace/prototype-0.4x196
merge_method: model_stock
tokenizer:
  source: base
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
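As a sketch of how a configuration like this is applied: assuming mergekit is installed (`pip install mergekit`), the merge can be reproduced with the `mergekit-yaml` CLI. The config filename and output directory below are placeholders, not part of this repository:

```shell
# Save the YAML configuration above as config.yaml, then run the merge.
# --cuda uses the GPU for the tensor arithmetic; omit it for a CPU-only run.
mergekit-yaml config.yaml /workspace/prototype-0.4x198 --cuda
```

Note that the local model paths in the config must exist on the machine running the merge; to reproduce it elsewhere, substitute the corresponding Hugging Face repo IDs.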