---
base_model:
- huihui-ai/CodeLlama-34b-Instruct-hf-abliterated
- QuixiAI/Samantha-1.11-CodeLlama-34b
- QuixiAI/WizardLM-1.0-Uncensored-CodeLlama-34b
- oobabooga/CodeBooga-34B-v0.1
library_name: transformers
tags:
- mergekit
- merge

---
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a5ad3c0b5704678a8612b9/yORb0BvaM0ZMCYzec_j9N.png">
<a href="https://www.youtube.com/watch?v=02ajBgSErIE" title="Karsten Koch - Blue Valley" target="_blank">intro music...</a>

## Coding-34B-U6-2

Models these days are trained so heavily on code. I had to check how some of the older ones fare with a few assistant bits mixed in ;P

### Models Merged

The following models were included in the merge:
* Samantha-1.11-CodeLlama-34b
* WizardLM-1.0-Uncensored-CodeLlama-34b
* CodeBooga-34B-v0.1
* CodeLlama-34b-Instruct-hf-abliterated (as base)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
name: Coding-34B-U6-2
models:
  - model: CodeLlama-34b-Instruct-hf-abliterated
  - model: Samantha-1.11-CodeLlama-34b
  - model: WizardLM-1.0-Uncensored-CodeLlama-34b
  - model: CodeBooga-34B-v0.1
base_model: CodeLlama-34b-Instruct-hf-abliterated
merge_method: model_stock
dtype: float16

```
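For a rough idea of what the `model_stock` merge method does, here is a minimal per-tensor sketch following the Model Stock interpolation idea (average the fine-tuned weights, then pull the average back toward the base model based on how well the task vectors agree). Names, and the exact details of mergekit's implementation, are assumptions for illustration only:

```python
import numpy as np

def model_stock_layer(base, finetuned):
    """Sketch of a Model Stock-style merge for one weight tensor.

    base:      weights of the base model
    finetuned: list of weight tensors from the fine-tuned models
    """
    # Task vectors: each fine-tuned model's weights relative to the base
    deltas = [w - base for w in finetuned]
    k = len(deltas)

    # Average pairwise cosine similarity between the task vectors:
    # high agreement -> trust the average; low agreement -> stay near the base
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_t = float(np.mean(cos_vals))

    # Interpolation weight from the Model Stock formulation
    t = k * cos_t / (1 + (k - 1) * cos_t)

    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

In practice a config like the one above is typically run with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-dir`, which applies this kind of interpolation across every tensor of the checkpoints.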