---
base_model:
- WesPro/Llama3-OrpoSmaug-Slerp-8B
- ResplendentAI/RP_Format_Llama3
library_name: transformers
tags:
- mergekit
- merge
---

This is my previous Llama3 merge (OrpoSmaug-Slerp) with an extra LoRA applied on top for better RP.

Thanks to mradermacher, GGUF quants (Q2_K-Q8_K and IQ3_XS-IQ4_XS) of this model are also available here: https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF
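
A minimal sketch of fetching one of those quants and running it locally, assuming `llama-cpp-python` is installed; the exact filename is an assumption and should be checked against the quant repo's file list:

```python
# Sketch: fetch one quant from the GGUF repo and run a single chat turn
# with llama-cpp-python. The filename is an assumption -- check the repo's
# file list for the exact quant names on offer.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF",
    filename="Llama3-RPLoRa-SmaugOrpo.Q4_K_M.gguf",  # assumed quant/filename
)

llm = Llama(model_path=gguf_path, n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```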
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [WesPro/Llama3-OrpoSmaug-Slerp-8B](https://huggingface.co/WesPro/Llama3-OrpoSmaug-Slerp-8B) + [ResplendentAI/RP_Format_Llama3](https://huggingface.co/ResplendentAI/RP_Format_Llama3)
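
As a usage sketch, the resulting merge can be loaded with transformers roughly like this; the repo id is inferred from the GGUF repo name above and the sampling settings are placeholders, not tuned values:

```python
# Sketch: load the merged model and run one chat turn with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this merge (inferred from the GGUF repo name above).
model_id = "WesPro/Llama3-RPLoRa-SmaugOrpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Stay in character and greet me."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```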
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: WesPro/Llama3-OrpoSmaug-Slerp-8B+ResplendentAI/RP_Format_Llama3
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```
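
For anyone who wants to reproduce the merge, here is a minimal sketch using mergekit's documented Python entry point; the output path and options are illustrative defaults, not the settings actually used:

```python
# Sketch: re-run the YAML config above through mergekit's Python API.
# Requires `pip install mergekit`; paths and options are illustrative.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML shown above, saved to a file
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged",      # output directory for the merged weights
    options=MergeOptions(
        copy_tokenizer=True,  # carry the tokenizer over to the output
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)
```

Note that the `base+LoRA` syntax in the config tells mergekit to apply the ResplendentAI/RP_Format_Llama3 LoRA onto the base model before the linear merge step runs.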