---
base_model:
- synthetic-code-training/qwen25-coder-7b-loc441-514-func1183-gen480-5e-0-00005lr-bs8-bf16
- R2E-Gym/R2EGym-7B-Agent
library_name: transformers
tags:
- mergekit
- merge
---
# hybrid_r2e70
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
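
In a linear merge, every parameter tensor of the output model is an element-wise weighted average of the corresponding tensors from the input models. The snippet below is a minimal sketch of that idea for two checkpoints with weights 0.7 and 0.3; it is illustrative only, not the mergekit implementation, and the function and variable names are made up for this example.

```python
import torch

def linear_merge(state_dict_a: dict, state_dict_b: dict,
                 w_a: float = 0.7, w_b: float = 0.3) -> dict:
    """Element-wise weighted average of two state dicts with matching keys/shapes."""
    merged = {}
    for name, tensor_a in state_dict_a.items():
        tensor_b = state_dict_b[name]
        # Accumulate in float32 for numerical stability, then cast to the configured dtype.
        avg = w_a * tensor_a.float() + w_b * tensor_b.float()
        merged[name] = avg.to(torch.bfloat16)
    return merged
```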
### Models Merged
The following models were included in the merge:
* [synthetic-code-training/qwen25-coder-7b-loc441-514-func1183-gen480-5e-0-00005lr-bs8-bf16](https://huggingface.co/synthetic-code-training/qwen25-coder-7b-loc441-514-func1183-gen480-5e-0-00005lr-bs8-bf16)
* [R2E-Gym/R2EGym-7B-Agent](https://huggingface.co/R2E-Gym/R2EGym-7B-Agent)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: synthetic-code-training/qwen25-coder-7b-loc441-514-func1183-gen480-5e-0-00005lr-bs8-bf16
    parameters:
      weight: 0.7
  - model: R2E-Gym/R2EGym-7B-Agent
    parameters:
      weight: 0.3
merge_method: linear
dtype: bfloat16
```
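
To reproduce the merge, save the configuration above to a file (e.g. `config.yaml`) and run `mergekit-yaml config.yaml ./hybrid_r2e70` with mergekit installed. The resulting checkpoint loads like any other Qwen2.5-based causal LM with `transformers`; the model path below is a placeholder for wherever this merge is stored locally or on the Hub.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: substitute the local directory or Hub repo id of this merge.
model_id = "./hybrid_r2e70"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```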