---
license: apache-2.0
tags:
- merge
- mergekit
- arcee-ai/Patent-Instruct-7b
- TencentARC/LLaMA-Pro-8B-Instruct
---
# Patent-Instruct-LLaMA-Pro
Patent-Instruct-LLaMA-Pro is a `passthrough` merge of the following models, built with [mergekit](https://github.com/cg123/mergekit). The merge interleaves 4-layer blocks from Patent-Instruct-7b with single layers from LLaMA-Pro-8B-Instruct, as shown in the configuration below:
* [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b)
* [TencentARC/LLaMA-Pro-8B-Instruct](https://huggingface.co/TencentARC/LLaMA-Pro-8B-Instruct)
## 🧩 Configuration
```yaml
merge_method: passthrough
dtype: bfloat16
slices:
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [0, 4]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [4, 5]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [4, 8]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [9, 10]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [8, 12]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [14, 15]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [12, 16]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [19, 20]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [16, 20]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [24, 25]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [20, 24]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [29, 30]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [24, 28]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [34, 35]
  - sources:
      - model: arcee-ai/Patent-Instruct-7b
        layer_range: [28, 32]
  - sources:
      - model: TencentARC/LLaMA-Pro-8B-Instruct
        layer_range: [39, 40]
```
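
## 💻 Usage

A minimal sketch of running the merged model with 🤗 Transformers. The repository id `arcee-ai/Patent-Instruct-LLaMA-Pro` and the example prompt are assumptions for illustration; substitute the actual repo id where this model is hosted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed repo id -- replace with the actual location of this merge.
model_id = "arcee-ai/Patent-Instruct-LLaMA-Pro"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",
)

prompt = "Draft a patent abstract for a self-cleaning solar panel coating."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```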