---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b
- austinm2151/Llama2-7b-Summarizer
---
# LLama_passthrough

LLama_passthrough is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b](https://huggingface.co/Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b)
* [austinm2151/Llama2-7b-Summarizer](https://huggingface.co/austinm2151/Llama2-7b-Summarizer)
## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Suchinthana/Sinhala-Translate-and-Dolly-Llama-7b
        layer_range: [0, 32]
  - sources:
      - model: austinm2151/Llama2-7b-Summarizer
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```