---
base_model:
- schonsense/70B_Chunky_stock
- meta-llama/Llama-3.3-70B-Instruct
- schonsense/VAR_stock
library_name: transformers
tags:
- mergekit
- merge
---
# FT_BC

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method, with [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as the base model.

### Models Merged

The following models were included in the merge:
* [schonsense/70B_Chunky_stock](https://huggingface.co/schonsense/70B_Chunky_stock)
* D:\mergekit\LORAs\applied\70B_ERA2_sun_r256
* D:\mergekit\_My_YAMLS\ERA2_stock
* [schonsense/VAR_stock](https://huggingface.co/schonsense/VAR_stock)
* D:\mergekit\_My_YAMLS\Book_stock

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: breadcrumbs
models:
  - model: "D:\\mergekit\\_My_YAMLS\\ERA2_stock"
    parameters:
      gamma: 0.01
      density: 0.4
      weight: 0.2
  - model: "D:\\mergekit\\_My_YAMLS\\Book_stock"
    parameters:
      gamma: 0.0
      density: 0.6
      weight: 0.3
  - model: schonsense/70B_Chunky_stock
    parameters:
      gamma: 0.01
      density: 0.3
      weight: 0.2
  - model: "D:\\mergekit\\LORAs\\applied\\70B_ERA2_sun_r256"
    parameters:
      gamma: 0.02
      density: 0.3
      weight: 0.2
  - model: schonsense/VAR_stock
    parameters:
      gamma: 0.02
      density: 0.5
      weight: 0.1
  - model: meta-llama/Llama-3.3-70B-Instruct
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  normalize: true
  int8_mask: true
  lambda: 1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
  pad_to_multiple_of: 8
```
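For intuition, the Breadcrumbs method sparsifies each model's task vector (its delta from the base) before a weighted sum: the top `gamma` fraction of entries by magnitude is pruned as outliers, a `density` fraction is kept, and the small remainder is zeroed. The sketch below, with hypothetical helper names (`breadcrumbs_mask`, `breadcrumbs_merge`), illustrates that idea on NumPy arrays; mergekit's actual implementation operates per-tensor on full checkpoints and differs in detail.

```python
import numpy as np

def breadcrumbs_mask(task_vector, density, gamma):
    # Hypothetical sketch of the Breadcrumbs sparsification step:
    # drop the top-`gamma` fraction of entries by magnitude (outliers),
    # keep the next `density` fraction, and zero everything smaller.
    flat = task_vector.ravel().copy()
    n = flat.size
    k_top = int(n * gamma)                  # outliers to prune
    k_keep = int(n * density)               # entries to retain
    order = np.argsort(np.abs(flat))        # indices, ascending magnitude
    hi = n - k_top                          # cut off the top-gamma band
    mask = np.zeros(n, dtype=bool)
    mask[order[max(hi - k_keep, 0):hi]] = True
    return (flat * mask).reshape(task_vector.shape)

def breadcrumbs_merge(base, models, densities, gammas, weights, lam=1.0):
    # merged = base + lambda * sum_i( w_i * masked_task_vector_i )
    merged = base.copy()
    for m, d, g, w in zip(models, densities, gammas, weights):
        merged += lam * w * breadcrumbs_mask(m - base, d, g)
    return merged
```

With `density: 0.5` and `gamma: 0.1` on a 10-element vector, one entry is dropped as an outlier and the five next-largest survive. The config above then maps per-model `density`/`gamma`/`weight` onto this step, with `lambda: 1` scaling the summed task vectors; a config like it can be passed to mergekit's `mergekit-yaml` CLI to reproduce the merge.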