---
base_model:
- ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
library_name: transformers
tags:
- mergekit
- merge
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- instruct
- chatml
license: apache-2.0
language:
- en
- ru
---
# DXP-Zero-V1.0-24b-Small-Instruct

Notice:
 - The model may lack the necessary evil for twisty or dark adventures, but it makes amends by producing coherent stories over long contexts. Well suited to romance, adventure, sci-fi, and even general-purpose use.

So I was browsing for a Mistral finetune and found this base [model](https://huggingface.co/ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf) by [ZeroAgency](https://huggingface.co/ZeroAgency), and oh boy... it was perfect! Here are a few notable improvements I observed.

Pros:
- Longer outputs for storytelling and roleplay.
- Dynamic output length: shorter prompts get shorter responses, longer prompts get longer ones.
- Less repetitive (though this still depends on your prompt and sampler settings).
- Tested up to 49,444 of 65,536 tokens with no degradation; if anything it tracks the context better, which strongly shapes the output. (The downside is that it picks up patterns from previous turns too quickly and treats them as the new standard.)

Tested genres:
- Romance/Bromance
  
Added note:
Testing was done with my own i1-Q5_K_M quantization. Download the i1-GGUF [here](https://huggingface.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF).
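If you want to try the GGUF locally, here is a minimal sketch using llama-cpp-python. The file name, context size, and sampling settings are placeholders, not official recommendations:

```python
# Minimal sketch, assuming the i1-Q5_K_M GGUF has already been downloaded.
# File name and sampling settings below are illustrative only.
from llama_cpp import Llama

llm = Llama(
    model_path="DXP-Zero-V1.0-24b-Small-Instruct.i1-Q5_K_M.gguf",  # hypothetical file name
    n_ctx=16384,       # raise toward 65536 if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if possible
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a slow-burn romance set on a generation ship."}],
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```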

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf](https://huggingface.co/ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf) as a base.
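For reference, the rough idea behind TIES (trim each task vector to its largest-magnitude entries, elect a sign per parameter, then merge only the deltas that agree with it) can be sketched as below. This is a toy illustration of the paper's procedure for a single tensor, not the code mergekit actually runs:

```python
import torch

def ties_merge(base, task_weights, densities, weights):
    """Toy TIES merge for one tensor: trim, elect sign, disjoint merge."""
    deltas = []
    for w, density, scale in zip(task_weights, densities, weights):
        delta = w - base
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(density * delta.numel()))
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        delta = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
        deltas.append(scale * delta)

    stacked = torch.stack(deltas)
    # Elect sign: per parameter, keep the sign with the larger total mass.
    elected = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the deltas that agree with the elected sign.
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```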

### Models Merged

The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      density: 0.7
      weight: 0.7
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.5
      weight: 0.5
      
merge_method: ties
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
tokenizer:
  source: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
```
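
To load the full-precision merge directly with transformers, something like the following should work (untested here, shown only as a starting point; the repo id is assumed from the GGUF link above):

```python
# Minimal loading sketch; adjust dtype/device mapping to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h34v7/DXP-Zero-V1.0-24b-Small-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Outline a cozy sci-fi adventure in three acts."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=400, temperature=0.8, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```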