---
language:
- en
base_model:
- ValiantLabs/Qwen3-14B-Esper3
- ValiantLabs/Qwen3-14B-Cobalt2
- Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mergekit
- merge
- esper
- esper-3
- cobalt
- cobalt-2
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- math
- math-reasoning
- math-instruct
- conversational
- chat
- instruct
---
# sequelbox/Qwen3-14B-Esper3Math
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit), combining the specialty skills of Esper 3 14B and Cobalt 2 14B.
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as the base model.
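For intuition, DELLA merges by pruning each fine-tuned model's delta from the base with magnitude-aware drop probabilities, rescaling the surviving deltas to stay unbiased, and summing the weighted results back onto the base. The snippet below is a simplified, illustrative sketch of that idea on flat parameter vectors; the function name `della_merge` and the rank-based probability schedule are assumptions for illustration, not mergekit's actual implementation.

```python
import numpy as np

def della_merge(base, tuned, weights, density=0.5, rng=None):
    """Illustrative DELLA-style merge of flat parameter vectors.

    Each tuned model's delta from the base is kept with a probability
    that grows with its magnitude (mean keep probability == density),
    survivors are rescaled by 1/p to keep the expected delta unbiased,
    and the weighted deltas are added back to the base. Simplified
    sketch only, not mergekit's implementation.
    """
    rng = rng or np.random.default_rng(0)
    merged_delta = np.zeros_like(base, dtype=float)
    for model, w in zip(tuned, weights):
        delta = model - base
        # Rank deltas by magnitude, normalized to [0, 1].
        ranks = np.abs(delta).argsort().argsort() / max(len(delta) - 1, 1)
        # Keep probability varies around `density`: larger-magnitude
        # deltas are more likely to survive pruning.
        eps = 0.9 * min(density, 1.0 - density)
        keep_p = density + eps * (2.0 * ranks - 1.0)
        kept = rng.random(len(delta)) < keep_p
        # Rescale survivors so E[pruned delta] == delta.
        merged_delta += w * np.where(kept, delta / keep_p, 0.0)
    return base + merged_delta
```

With `density=1.0` nothing is dropped and the result reduces to a plain weighted delta sum, which mirrors how the `density` and `weight` parameters in the configuration below interact.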
### Models Merged
The following models were included in the merge:
* [ValiantLabs/Qwen3-14B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3)
* [ValiantLabs/Qwen3-14B-Cobalt2](https://huggingface.co/ValiantLabs/Qwen3-14B-Cobalt2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Qwen3-14B-Esper3
    parameters:
      density: 0.5
      weight: 0.3
  - model: ValiantLabs/Qwen3-14B-Cobalt2
    parameters:
      density: 0.5
      weight: 0.25
base_model: Qwen/Qwen3-14B
```